00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 938 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3605 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.161 Using shallow fetch with depth 1 00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.161 > git --version # timeout=10 00:00:00.182 > git --version # 'git version 2.39.2' 00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.196 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.196 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.031 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.041 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.051 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.051 > git config core.sparsecheckout # timeout=10 00:00:06.061 > git read-tree -mu HEAD # timeout=10 00:00:06.075 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.092 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.092 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.205 [Pipeline] Start of Pipeline 00:00:06.215 [Pipeline] library 00:00:06.217 Loading library shm_lib@master 00:00:06.217 Library shm_lib@master is cached. Copying from home. 00:00:06.230 [Pipeline] node 00:00:06.248 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.250 [Pipeline] { 00:00:06.257 [Pipeline] catchError 00:00:06.258 [Pipeline] { 00:00:06.266 [Pipeline] wrap 00:00:06.273 [Pipeline] { 00:00:06.278 [Pipeline] stage 00:00:06.279 [Pipeline] { (Prologue) 00:00:06.293 [Pipeline] echo 00:00:06.295 Node: VM-host-SM0 00:00:06.300 [Pipeline] cleanWs 00:00:06.311 [WS-CLEANUP] Deleting project workspace... 00:00:06.311 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.316 [WS-CLEANUP] done 00:00:06.508 [Pipeline] setCustomBuildProperty 00:00:06.575 [Pipeline] httpRequest 00:00:06.943 [Pipeline] echo 00:00:06.944 Sorcerer 10.211.164.101 is alive 00:00:06.950 [Pipeline] retry 00:00:06.952 [Pipeline] { 00:00:06.962 [Pipeline] httpRequest 00:00:06.967 HttpMethod: GET 00:00:06.968 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:06.968 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:06.983 Response Code: HTTP/1.1 200 OK 00:00:06.983 Success: Status code 200 is in the accepted range: 200,404 00:00:06.984 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.648 [Pipeline] } 00:00:10.665 [Pipeline] // retry 00:00:10.673 [Pipeline] sh 00:00:10.957 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.973 [Pipeline] httpRequest 00:00:11.389 [Pipeline] echo 00:00:11.391 Sorcerer 10.211.164.101 is alive 00:00:11.401 [Pipeline] retry 00:00:11.403 [Pipeline] { 00:00:11.417 [Pipeline] httpRequest 00:00:11.422 HttpMethod: GET 00:00:11.422 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:11.423 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:11.444 Response Code: HTTP/1.1 200 OK 00:00:11.445 Success: Status code 200 is in the accepted range: 200,404 00:00:11.445 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:57.435 [Pipeline] } 00:00:57.454 [Pipeline] // retry 00:00:57.462 [Pipeline] sh 00:00:57.749 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:00.296 [Pipeline] sh 00:01:00.579 + git -C spdk log --oneline -n5 00:01:00.579 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:00.579 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:00.579 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:00.579 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:00.579 9469ea403 nvme/fio_plugin: add trim support 00:01:00.599 [Pipeline] withCredentials 00:01:00.611 > git --version # timeout=10 00:01:00.624 > git --version # 'git version 2.39.2' 00:01:00.642 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:00.644 [Pipeline] { 00:01:00.654 [Pipeline] retry 00:01:00.656 [Pipeline] { 00:01:00.672 [Pipeline] sh 00:01:00.954 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:00.967 [Pipeline] } 00:01:00.984 [Pipeline] // retry 00:01:00.989 [Pipeline] } 00:01:01.005 [Pipeline] // withCredentials 00:01:01.016 [Pipeline] httpRequest 00:01:01.457 [Pipeline] echo 00:01:01.459 Sorcerer 10.211.164.101 is alive 00:01:01.469 [Pipeline] retry 00:01:01.471 [Pipeline] { 00:01:01.485 [Pipeline] httpRequest 00:01:01.491 HttpMethod: GET 00:01:01.491 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:01.492 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:01.493 Response Code: HTTP/1.1 200 OK 00:01:01.494 Success: Status code 200 is in the accepted range: 200,404 00:01:01.494 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:08.318 [Pipeline] } 00:01:08.334 [Pipeline] // 
retry 00:01:08.341 [Pipeline] sh 00:01:08.622 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:10.010 [Pipeline] sh 00:01:10.292 + git -C dpdk log --oneline -n5 00:01:10.292 eeb0605f11 version: 23.11.0 00:01:10.292 238778122a doc: update release notes for 23.11 00:01:10.292 46aa6b3cfc doc: fix description of RSS features 00:01:10.292 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:10.292 7e421ae345 devtools: support skipping forbid rule check 00:01:10.310 [Pipeline] writeFile 00:01:10.324 [Pipeline] sh 00:01:10.607 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.619 [Pipeline] sh 00:01:10.901 + cat autorun-spdk.conf 00:01:10.901 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.901 SPDK_TEST_NVMF=1 00:01:10.901 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.901 SPDK_TEST_USDT=1 00:01:10.901 SPDK_RUN_UBSAN=1 00:01:10.901 SPDK_TEST_NVMF_MDNS=1 00:01:10.901 NET_TYPE=virt 00:01:10.901 SPDK_JSONRPC_GO_CLIENT=1 00:01:10.901 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:10.901 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.901 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.908 RUN_NIGHTLY=1 00:01:10.910 [Pipeline] } 00:01:10.923 [Pipeline] // stage 00:01:10.938 [Pipeline] stage 00:01:10.941 [Pipeline] { (Run VM) 00:01:10.953 [Pipeline] sh 00:01:11.234 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:11.234 + echo 'Start stage prepare_nvme.sh' 00:01:11.234 Start stage prepare_nvme.sh 00:01:11.234 + [[ -n 2 ]] 00:01:11.234 + disk_prefix=ex2 00:01:11.234 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:11.234 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:11.234 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:11.234 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.234 ++ SPDK_TEST_NVMF=1 00:01:11.234 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.234 ++ SPDK_TEST_USDT=1 00:01:11.234 ++ SPDK_RUN_UBSAN=1 00:01:11.234 ++ SPDK_TEST_NVMF_MDNS=1 00:01:11.234 ++ NET_TYPE=virt 00:01:11.234 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:11.234 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:11.234 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:11.234 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.234 ++ RUN_NIGHTLY=1 00:01:11.234 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:11.234 + nvme_files=() 00:01:11.234 + declare -A nvme_files 00:01:11.234 + backend_dir=/var/lib/libvirt/images/backends 00:01:11.234 + nvme_files['nvme.img']=5G 00:01:11.234 + nvme_files['nvme-cmb.img']=5G 00:01:11.234 + nvme_files['nvme-multi0.img']=4G 00:01:11.234 + nvme_files['nvme-multi1.img']=4G 00:01:11.234 + nvme_files['nvme-multi2.img']=4G 00:01:11.234 + nvme_files['nvme-openstack.img']=8G 00:01:11.234 + nvme_files['nvme-zns.img']=5G 00:01:11.234 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:11.234 + (( SPDK_TEST_FTL == 1 )) 00:01:11.234 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:11.234 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:11.234 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:11.235 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.235 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:11.235 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.235 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:11.235 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:11.235 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:11.235 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.235 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:11.235 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.235 + for nvme in "${!nvme_files[@]}" 00:01:11.235 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:11.493 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.493 + for nvme in "${!nvme_files[@]}" 00:01:11.493 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:11.493 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.493 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:11.493 + echo 'End stage prepare_nvme.sh' 00:01:11.493 End stage prepare_nvme.sh 00:01:11.502 [Pipeline] sh 00:01:11.777 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.777 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:11.777 00:01:11.777 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:11.777 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:11.777 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:11.777 HELP=0 00:01:11.777 DRY_RUN=0 00:01:11.777 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:11.777 NVME_DISKS_TYPE=nvme,nvme, 00:01:11.777 NVME_AUTO_CREATE=0 00:01:11.777 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:11.777 NVME_CMB=,, 00:01:11.777 NVME_PMR=,, 00:01:11.777 NVME_ZNS=,, 00:01:11.777 NVME_MS=,, 00:01:11.777 NVME_FDP=,, 00:01:11.777 
SPDK_VAGRANT_DISTRO=fedora39 00:01:11.777 SPDK_VAGRANT_VMCPU=10 00:01:11.777 SPDK_VAGRANT_VMRAM=12288 00:01:11.777 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.777 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:11.777 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.777 SPDK_OPENSTACK_NETWORK=0 00:01:11.777 VAGRANT_PACKAGE_BOX=0 00:01:11.777 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:11.777 FORCE_DISTRO=true 00:01:11.777 VAGRANT_BOX_VERSION= 00:01:11.777 EXTRA_VAGRANTFILES= 00:01:11.777 NIC_MODEL=e1000 00:01:11.777 00:01:11.777 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:11.777 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.310 Bringing machine 'default' up with 'libvirt' provider... 00:01:14.881 ==> default: Creating image (snapshot of base box volume). 00:01:15.140 ==> default: Creating domain with the following settings... 00:01:15.140 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730704036_138693b08f358618993c 00:01:15.140 ==> default: -- Domain type: kvm 00:01:15.140 ==> default: -- Cpus: 10 00:01:15.140 ==> default: -- Feature: acpi 00:01:15.140 ==> default: -- Feature: apic 00:01:15.140 ==> default: -- Feature: pae 00:01:15.140 ==> default: -- Memory: 12288M 00:01:15.140 ==> default: -- Memory Backing: hugepages: 00:01:15.140 ==> default: -- Management MAC: 00:01:15.140 ==> default: -- Loader: 00:01:15.140 ==> default: -- Nvram: 00:01:15.140 ==> default: -- Base box: spdk/fedora39 00:01:15.140 ==> default: -- Storage pool: default 00:01:15.140 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730704036_138693b08f358618993c.img (20G) 00:01:15.140 ==> default: -- Volume Cache: default 00:01:15.140 ==> default: -- Kernel: 00:01:15.140 ==> default: -- Initrd: 00:01:15.140 ==> default: -- Graphics Type: vnc 00:01:15.140 ==> default: -- Graphics Port: -1 00:01:15.140 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.140 ==> default: -- Graphics Password: Not defined 00:01:15.140 ==> default: -- Video Type: cirrus 00:01:15.140 ==> default: -- Video VRAM: 9216 00:01:15.140 ==> default: -- Sound Type: 00:01:15.140 ==> default: -- Keymap: en-us 00:01:15.140 ==> default: -- TPM Path: 00:01:15.140 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.140 ==> default: -- Command line args: 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:15.140 ==> default: -> value=-drive, 00:01:15.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:15.140 ==> default: -> value=-drive, 00:01:15.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.140 ==> default: -> value=-drive, 00:01:15.140 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.140 ==> default: -> value=-drive, 00:01:15.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:15.140 ==> default: -> value=-device, 00:01:15.140 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.140 ==> default: Creating shared folders metadata... 00:01:15.399 ==> default: Starting domain. 00:01:17.303 ==> default: Waiting for domain to get an IP address... 00:01:35.389 ==> default: Waiting for SSH to become available... 00:01:35.389 ==> default: Configuring and enabling network interfaces... 00:01:38.673 default: SSH address: 192.168.121.217:22 00:01:38.673 default: SSH username: vagrant 00:01:38.673 default: SSH auth method: private key 00:01:40.576 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:48.687 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:53.952 ==> default: Mounting SSHFS shared folder... 00:01:55.327 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.327 ==> default: Checking Mount.. 00:01:56.702 ==> default: Folder Successfully Mounted! 00:01:56.702 ==> default: Running provisioner: file... 00:01:57.688 default: ~/.gitconfig => .gitconfig 00:01:57.946 00:01:57.946 SUCCESS! 00:01:57.946 00:01:57.946 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:57.946 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.946 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:57.946 00:01:57.959 [Pipeline] } 00:01:57.970 [Pipeline] // stage 00:01:57.978 [Pipeline] dir 00:01:57.979 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:57.980 [Pipeline] { 00:01:57.991 [Pipeline] catchError 00:01:57.992 [Pipeline] { 00:01:58.003 [Pipeline] sh 00:01:58.281 + vagrant ssh-config --host vagrant 00:01:58.281 + sed -ne /^Host/,$p 00:01:58.281 + tee ssh_conf 00:02:00.812 Host vagrant 00:02:00.812 HostName 192.168.121.217 00:02:00.812 User vagrant 00:02:00.812 Port 22 00:02:00.812 UserKnownHostsFile /dev/null 00:02:00.812 StrictHostKeyChecking no 00:02:00.812 PasswordAuthentication no 00:02:00.812 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:00.812 IdentitiesOnly yes 00:02:00.812 LogLevel FATAL 00:02:00.812 ForwardAgent yes 00:02:00.812 ForwardX11 yes 00:02:00.812 00:02:00.826 [Pipeline] withEnv 00:02:00.828 [Pipeline] { 00:02:00.843 [Pipeline] sh 00:02:01.122 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:01.122 source /etc/os-release 00:02:01.122 [[ -e /image.version ]] && img=$(< /image.version) 00:02:01.122 # Minimal, systemd-like check. 
00:02:01.122 if [[ -e /.dockerenv ]]; then 00:02:01.122 # Clear garbage from the node's name: 00:02:01.122 # agt-er_autotest_547-896 -> autotest_547-896 00:02:01.122 # $HOSTNAME is the actual container id 00:02:01.122 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:01.122 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:01.122 # We can assume this is a mount from a host where container is running, 00:02:01.122 # so fetch its hostname to easily identify the target swarm worker. 00:02:01.122 container="$(< /etc/hostname) ($agent)" 00:02:01.122 else 00:02:01.122 # Fallback 00:02:01.122 container=$agent 00:02:01.122 fi 00:02:01.122 fi 00:02:01.122 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.122 00:02:01.392 [Pipeline] } 00:02:01.407 [Pipeline] // withEnv 00:02:01.415 [Pipeline] setCustomBuildProperty 00:02:01.428 [Pipeline] stage 00:02:01.431 [Pipeline] { (Tests) 00:02:01.446 [Pipeline] sh 00:02:01.724 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.996 [Pipeline] sh 00:02:02.276 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.549 [Pipeline] timeout 00:02:02.550 Timeout set to expire in 1 hr 0 min 00:02:02.551 [Pipeline] { 00:02:02.566 [Pipeline] sh 00:02:02.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.411 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:02:03.421 [Pipeline] sh 00:02:03.699 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:04.014 [Pipeline] sh 00:02:04.293 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.565 [Pipeline] sh 00:02:04.845 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:05.104 ++ readlink -f spdk_repo 00:02:05.104 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:05.104 + [[ -n /home/vagrant/spdk_repo ]] 00:02:05.104 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:05.104 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:05.104 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:05.104 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:05.104 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:05.104 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:05.104 + cd /home/vagrant/spdk_repo 00:02:05.104 + source /etc/os-release 00:02:05.104 ++ NAME='Fedora Linux' 00:02:05.104 ++ VERSION='39 (Cloud Edition)' 00:02:05.104 ++ ID=fedora 00:02:05.104 ++ VERSION_ID=39 00:02:05.104 ++ VERSION_CODENAME= 00:02:05.104 ++ PLATFORM_ID=platform:f39 00:02:05.104 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:05.104 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:05.104 ++ LOGO=fedora-logo-icon 00:02:05.104 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:05.104 ++ HOME_URL=https://fedoraproject.org/ 00:02:05.104 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:05.104 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:05.104 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:05.104 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:05.104 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:05.104 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:05.104 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:05.104 ++ SUPPORT_END=2024-11-12 00:02:05.104 ++ VARIANT='Cloud Edition' 00:02:05.104 ++ VARIANT_ID=cloud 00:02:05.104 + uname -a 00:02:05.104 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:05.104 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:05.104 Hugepages 00:02:05.104 node hugesize free / total 00:02:05.104 node0 1048576kB 0 / 0 00:02:05.104 node0 2048kB 0 / 0 00:02:05.104 00:02:05.104 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.104 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.104 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.104 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:05.104 + rm -f /tmp/spdk-ld-path 00:02:05.104 + source autorun-spdk.conf 00:02:05.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.104 ++ SPDK_TEST_NVMF=1 00:02:05.104 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.104 ++ SPDK_TEST_USDT=1 00:02:05.104 ++ SPDK_RUN_UBSAN=1 00:02:05.104 ++ SPDK_TEST_NVMF_MDNS=1 00:02:05.104 ++ NET_TYPE=virt 00:02:05.104 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:05.104 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.104 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.104 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.104 ++ RUN_NIGHTLY=1 00:02:05.104 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.104 + [[ -n '' ]] 00:02:05.104 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.362 + for M in /var/spdk/build-*-manifest.txt 00:02:05.362 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.362 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.362 + for M in /var/spdk/build-*-manifest.txt 00:02:05.363 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.363 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.363 + for M in /var/spdk/build-*-manifest.txt 00:02:05.363 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.363 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.363 ++ uname 00:02:05.363 + [[ Linux == \L\i\n\u\x ]] 00:02:05.363 + sudo dmesg -T 00:02:05.363 + sudo dmesg --clear 00:02:05.363 + dmesg_pid=5964 00:02:05.363 + sudo dmesg -Tw 00:02:05.363 + [[ Fedora Linux == FreeBSD ]] 00:02:05.363 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:05.363 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.363 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.363 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.363 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.363 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.363 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.363 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.363 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.363 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.363 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.363 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.363 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.363 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.363 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.363 Test configuration: 00:02:05.363 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.363 SPDK_TEST_NVMF=1 00:02:05.363 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.363 SPDK_TEST_USDT=1 00:02:05.363 SPDK_RUN_UBSAN=1 00:02:05.363 SPDK_TEST_NVMF_MDNS=1 00:02:05.363 NET_TYPE=virt 00:02:05.363 SPDK_JSONRPC_GO_CLIENT=1 00:02:05.363 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.363 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.363 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.363 RUN_NIGHTLY=1 07:08:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.363 07:08:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.363 07:08:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.363 07:08:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.363 07:08:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.363 07:08:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.363 07:08:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.363 07:08:07 -- paths/export.sh@5 -- $ export PATH 00:02:05.363 07:08:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.363 07:08:07 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.363 07:08:07 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:05.363 07:08:07 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730704087.XXXXXX 00:02:05.363 07:08:07 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730704087.CBEMgt 00:02:05.363 07:08:07 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:05.363 07:08:07 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:05.363 07:08:07 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.363 07:08:07 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:05.363 07:08:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.363 07:08:07 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.363 07:08:07 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:05.363 07:08:07 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:05.363 07:08:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.622 07:08:07 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:05.622 07:08:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.622 07:08:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.622 07:08:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.622 07:08:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.622 Mon Nov 4 07:08:07 AM UTC 2024 00:02:05.622 07:08:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.622 LTS-66-g726a04d70 00:02:05.622 07:08:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:05.622 07:08:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.622 07:08:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.622 07:08:07 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:05.622 07:08:07 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:05.622 07:08:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.622 ************************************ 00:02:05.622 START TEST ubsan 00:02:05.622 ************************************ 00:02:05.622 using ubsan 00:02:05.622 07:08:07 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:05.622 00:02:05.622 real 0m0.000s 00:02:05.622 user 0m0.000s 00:02:05.622 sys 0m0.000s 00:02:05.622 07:08:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.622 07:08:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.622 ************************************ 00:02:05.622 END TEST ubsan 00:02:05.622 ************************************ 00:02:05.622 07:08:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:05.622 07:08:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:05.622 07:08:07 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:05.623 07:08:07 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:05.623 07:08:07 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:05.623 07:08:07 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:05.623 ************************************ 00:02:05.623 START TEST build_native_dpdk 00:02:05.623 ************************************ 00:02:05.623 07:08:07 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:05.623 07:08:07 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:05.623 07:08:07 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:05.623 07:08:07 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:05.623 07:08:07 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:05.623 07:08:07 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:05.623 07:08:07 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:05.623 07:08:07 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:05.623 07:08:07 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:05.623 07:08:07 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:05.623 07:08:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:05.623 07:08:07 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:05.623 07:08:07 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:05.623 07:08:07 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:05.623 07:08:07 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.623 07:08:07 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:05.623 07:08:07 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:05.623 07:08:07 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:05.623 eeb0605f11 version: 23.11.0 00:02:05.623 238778122a doc: update release notes for 23.11 00:02:05.623 46aa6b3cfc doc: fix description of RSS features 00:02:05.623 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:05.623 7e421ae345 devtools: support skipping forbid rule check 00:02:05.623 07:08:07 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:05.623 07:08:07 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:05.623 07:08:07 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:05.623 07:08:07 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:05.623 07:08:07 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:05.623 07:08:07 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:05.623 07:08:07 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:05.623 07:08:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:05.623 07:08:07 -- common/autobuild_common.sh@167 -- $ cd 
/home/vagrant/spdk_repo/dpdk 00:02:05.623 07:08:07 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:05.623 07:08:07 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:05.623 07:08:07 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:05.623 07:08:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:05.623 07:08:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.623 07:08:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.623 07:08:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.623 07:08:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.623 07:08:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.623 07:08:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.623 07:08:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.623 07:08:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.623 07:08:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.623 07:08:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.623 07:08:07 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.623 07:08:07 -- scripts/common.sh@344 -- $ : 1 00:02:05.623 07:08:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.623 07:08:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:05.623 07:08:07 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.623 07:08:07 -- scripts/common.sh@352 -- $ local d=23 00:02:05.623 07:08:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.623 07:08:07 -- scripts/common.sh@354 -- $ echo 23 00:02:05.623 07:08:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.623 07:08:07 -- scripts/common.sh@365 -- $ decimal 21 00:02:05.623 07:08:07 -- scripts/common.sh@352 -- $ local d=21 00:02:05.623 07:08:07 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:05.623 07:08:07 -- scripts/common.sh@354 -- $ echo 21 00:02:05.623 07:08:07 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:05.623 07:08:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.623 07:08:07 -- scripts/common.sh@366 -- $ return 1 00:02:05.623 07:08:07 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:05.623 patching file config/rte_config.h 00:02:05.623 Hunk #1 succeeded at 60 (offset 1 line). 00:02:05.623 07:08:07 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:05.623 07:08:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:05.623 07:08:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.623 07:08:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.623 07:08:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.623 07:08:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.623 07:08:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.623 07:08:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.623 07:08:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.623 07:08:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.623 07:08:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.623 07:08:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.623 07:08:07 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.623 07:08:07 -- scripts/common.sh@344 -- $ : 1 00:02:05.623 07:08:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.623 07:08:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.623 07:08:07 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.623 07:08:07 -- scripts/common.sh@352 -- $ local d=23 00:02:05.623 07:08:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.623 07:08:07 -- scripts/common.sh@354 -- $ echo 23 00:02:05.623 07:08:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.623 07:08:07 -- scripts/common.sh@365 -- $ decimal 24 00:02:05.623 07:08:07 -- scripts/common.sh@352 -- $ local d=24 00:02:05.623 07:08:07 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.623 07:08:07 -- scripts/common.sh@354 -- $ echo 24 00:02:05.623 07:08:07 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:05.623 07:08:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.623 07:08:07 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:05.623 07:08:07 -- scripts/common.sh@367 -- $ return 0 00:02:05.623 07:08:07 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:05.623 patching file lib/pcapng/rte_pcapng.c 00:02:05.623 07:08:07 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:05.623 07:08:07 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:05.623 07:08:07 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:05.623 07:08:07 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:05.623 07:08:07 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:10.893 The Meson build system 00:02:10.893 Version: 1.5.0 00:02:10.893 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:10.893 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:10.893 Build type: native build 00:02:10.893 Program cat found: YES (/usr/bin/cat) 00:02:10.893 Project name: DPDK 00:02:10.893 Project version: 23.11.0 00:02:10.893 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:10.893 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:10.893 Host machine cpu family: x86_64 00:02:10.893 Host machine cpu: x86_64 00:02:10.893 Message: ## Building in Developer Mode ## 00:02:10.893 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.893 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:10.893 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.893 Program python3 found: YES (/usr/bin/python3) 00:02:10.893 Program cat found: YES (/usr/bin/cat) 00:02:10.894 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:10.894 Compiler for C supports arguments -march=native: YES 00:02:10.894 Checking for size of "void *" : 8 00:02:10.894 Checking for size of "void *" : 8 (cached) 00:02:10.894 Library m found: YES 00:02:10.894 Library numa found: YES 00:02:10.894 Has header "numaif.h" : YES 00:02:10.894 Library fdt found: NO 00:02:10.894 Library execinfo found: NO 00:02:10.894 Has header "execinfo.h" : YES 00:02:10.894 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.894 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.894 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.894 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.894 Run-time dependency openssl found: YES 3.1.1 00:02:10.894 Run-time dependency libpcap found: YES 1.10.4 00:02:10.894 Has header "pcap.h" with dependency libpcap: YES 00:02:10.894 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.894 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.894 Compiler for C supports arguments -Wformat: YES 00:02:10.894 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:10.894 Compiler for C supports arguments -Wformat-security: NO 00:02:10.894 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.894 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.894 Compiler for C supports arguments -Wnested-externs: YES 00:02:10.894 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.894 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.894 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.894 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.894 Compiler for C supports arguments -Wundef: YES 00:02:10.894 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.894 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.894 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:10.894 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.894 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:10.894 Program objdump found: YES (/usr/bin/objdump) 00:02:10.894 Compiler for C supports arguments -mavx512f: YES 00:02:10.894 Checking if "AVX512 checking" compiles: YES 00:02:10.894 Fetching value of define "__SSE4_2__" : 1 00:02:10.894 Fetching value of define "__AES__" : 1 00:02:10.894 Fetching value of define "__AVX__" : 1 00:02:10.894 Fetching value of define "__AVX2__" : 1 00:02:10.894 Fetching value of define "__AVX512BW__" : (undefined) 00:02:10.894 Fetching value of define "__AVX512CD__" : (undefined) 00:02:10.894 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:10.894 Fetching value of define "__AVX512F__" : (undefined) 00:02:10.894 Fetching value of define "__AVX512VL__" : (undefined) 00:02:10.894 Fetching value of define "__PCLMUL__" : 1 00:02:10.894 Fetching value of define "__RDRND__" : 1 00:02:10.894 Fetching value of define "__RDSEED__" : 1 00:02:10.894 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.894 Fetching value of define "__znver1__" : (undefined) 00:02:10.894 Fetching value of define "__znver2__" : (undefined) 00:02:10.894 Fetching value of define "__znver3__" : (undefined) 00:02:10.894 Fetching value of define "__znver4__" : (undefined) 00:02:10.894 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:10.894 Message: lib/log: Defining dependency "log" 00:02:10.894 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.894 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.894 Checking for function "getentropy" : NO 00:02:10.894 Message: lib/eal: Defining dependency "eal" 00:02:10.894 Message: lib/ring: Defining dependency "ring" 00:02:10.894 Message: lib/rcu: Defining dependency "rcu" 00:02:10.894 Message: lib/mempool: Defining dependency "mempool" 00:02:10.894 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.894 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:10.894 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.894 Compiler for C supports arguments -mpclmul: YES 00:02:10.894 Compiler for C supports arguments -maes: YES 00:02:10.894 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.894 Compiler for C supports arguments -mavx512bw: YES 00:02:10.894 Compiler for C supports arguments -mavx512dq: YES 00:02:10.894 Compiler for C supports arguments -mavx512vl: YES 00:02:10.894 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.894 Compiler for C supports arguments -mavx2: YES 00:02:10.894 Compiler for C supports arguments -mavx: YES 00:02:10.894 Message: lib/net: Defining dependency "net" 00:02:10.894 Message: lib/meter: Defining dependency "meter" 00:02:10.894 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.894 Message: lib/pci: Defining dependency "pci" 00:02:10.894 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.894 Message: lib/metrics: Defining dependency "metrics" 00:02:10.894 Message: lib/hash: Defining dependency "hash" 00:02:10.894 Message: lib/timer: Defining dependency "timer" 00:02:10.894 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:10.894 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:10.894 Message: lib/acl: Defining dependency "acl" 00:02:10.894 Message: lib/bbdev: Defining dependency "bbdev" 00:02:10.894 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:10.894 Run-time dependency libelf found: YES 0.191 00:02:10.894 Message: lib/bpf: Defining dependency "bpf" 00:02:10.894 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:10.894 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.894 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.894 Message: lib/distributor: Defining dependency "distributor" 00:02:10.894 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.894 Message: lib/efd: Defining dependency "efd" 00:02:10.894 Message: lib/eventdev: Defining dependency "eventdev" 00:02:10.894 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:10.894 Message: lib/gpudev: Defining dependency "gpudev" 00:02:10.894 Message: lib/gro: Defining dependency "gro" 00:02:10.894 Message: lib/gso: Defining dependency "gso" 00:02:10.894 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:10.894 Message: lib/jobstats: Defining dependency "jobstats" 00:02:10.894 Message: lib/latencystats: Defining dependency "latencystats" 00:02:10.894 Message: lib/lpm: Defining dependency "lpm" 00:02:10.894 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:10.894 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:10.894 Message: lib/member: Defining dependency "member" 00:02:10.894 Message: lib/pcapng: Defining dependency "pcapng" 00:02:10.894 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.894 Message: lib/power: Defining dependency "power" 00:02:10.894 Message: lib/rawdev: Defining dependency "rawdev" 00:02:10.894 Message: lib/regexdev: Defining dependency "regexdev" 00:02:10.894 Message: lib/mldev: Defining dependency "mldev" 00:02:10.894 Message: lib/rib: Defining dependency "rib" 00:02:10.894 Message: lib/reorder: Defining dependency "reorder" 00:02:10.894 Message: lib/sched: Defining dependency "sched" 00:02:10.894 Message: lib/security: Defining dependency "security" 00:02:10.894 Message: lib/stack: Defining dependency "stack" 00:02:10.894 Has header "linux/userfaultfd.h" : YES 00:02:10.894 Has header "linux/vduse.h" : YES 00:02:10.894 Message: lib/vhost: Defining dependency "vhost" 00:02:10.894 Message: lib/ipsec: Defining dependency "ipsec" 00:02:10.894 Message: lib/pdcp: Defining dependency "pdcp" 00:02:10.894 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.894 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.894 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:10.894 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:10.894 Message: lib/fib: Defining dependency "fib" 00:02:10.894 Message: lib/port: Defining dependency "port" 00:02:10.894 Message: lib/pdump: Defining dependency "pdump" 00:02:10.894 Message: lib/table: Defining dependency "table" 00:02:10.894 Message: lib/pipeline: Defining dependency "pipeline" 00:02:10.894 Message: lib/graph: Defining dependency "graph" 00:02:10.894 Message: lib/node: Defining dependency "node" 00:02:10.894 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.795 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.795 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.795 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.795 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.795 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.795 Compiler for C supports arguments -Wno-format: YES 00:02:12.795 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.795 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:12.795 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.795 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:12.795 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:12.795 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.795 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.795 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:12.795 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:12.795 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:12.795 Has header "sys/epoll.h" : YES 00:02:12.795 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.795 Configuring doxy-api-html.conf using configuration 00:02:12.795 Configuring doxy-api-man.conf using configuration 00:02:12.795 Program mandb found: YES (/usr/bin/mandb) 00:02:12.795 Program sphinx-build found: NO 00:02:12.795 Configuring rte_build_config.h using configuration 00:02:12.795 Message: 00:02:12.795 ================= 00:02:12.795 Applications Enabled 00:02:12.795 ================= 
00:02:12.795 00:02:12.795 apps: 00:02:12.796 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:12.796 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:12.796 test-pmd, test-regex, test-sad, test-security-perf, 00:02:12.796 00:02:12.796 Message: 00:02:12.796 ================= 00:02:12.796 Libraries Enabled 00:02:12.796 ================= 00:02:12.796 00:02:12.796 libs: 00:02:12.796 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.796 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:12.796 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:12.796 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:12.796 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:12.796 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:12.796 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:12.796 00:02:12.796 00:02:12.796 Message: 00:02:12.796 =============== 00:02:12.796 Drivers Enabled 00:02:12.796 =============== 00:02:12.796 00:02:12.796 common: 00:02:12.796 00:02:12.796 bus: 00:02:12.796 pci, vdev, 00:02:12.796 mempool: 00:02:12.796 ring, 00:02:12.796 dma: 00:02:12.796 00:02:12.796 net: 00:02:12.796 i40e, 00:02:12.796 raw: 00:02:12.796 00:02:12.796 crypto: 00:02:12.796 00:02:12.796 compress: 00:02:12.796 00:02:12.796 regex: 00:02:12.796 00:02:12.796 ml: 00:02:12.796 00:02:12.796 vdpa: 00:02:12.796 00:02:12.796 event: 00:02:12.796 00:02:12.796 baseband: 00:02:12.796 00:02:12.796 gpu: 00:02:12.796 00:02:12.796 00:02:12.796 Message: 00:02:12.796 ================= 00:02:12.796 Content Skipped 00:02:12.796 ================= 00:02:12.796 00:02:12.796 apps: 00:02:12.796 00:02:12.796 libs: 00:02:12.796 00:02:12.796 drivers: 00:02:12.796 common/cpt: not in enabled drivers build config 00:02:12.796 common/dpaax: not in enabled drivers build config 00:02:12.796 common/iavf: not in enabled drivers build config 00:02:12.796 common/idpf: not in enabled drivers build config 00:02:12.796 common/mvep: not in enabled drivers build config 00:02:12.796 common/octeontx: not in enabled drivers build config 00:02:12.796 bus/auxiliary: not in enabled drivers build config 00:02:12.796 bus/cdx: not in enabled drivers build config 00:02:12.796 bus/dpaa: not in enabled drivers build config 00:02:12.796 bus/fslmc: not in enabled drivers build config 00:02:12.796 bus/ifpga: not in enabled drivers build config 00:02:12.796 bus/platform: not in enabled drivers build config 00:02:12.796 bus/vmbus: not in enabled drivers build config 00:02:12.796 common/cnxk: not in enabled drivers build config 00:02:12.796 common/mlx5: not in enabled drivers build config 00:02:12.796 common/nfp: not in enabled drivers build config 00:02:12.796 common/qat: not in enabled drivers build config 00:02:12.796 common/sfc_efx: not in enabled drivers build config 00:02:12.796 mempool/bucket: not in enabled drivers build config 00:02:12.796 mempool/cnxk: not in enabled drivers build config 00:02:12.796 mempool/dpaa: not in enabled drivers build config 00:02:12.796 mempool/dpaa2: not in enabled drivers build config 00:02:12.796 mempool/octeontx: not in enabled drivers build config 00:02:12.796 mempool/stack: not in enabled drivers build config 00:02:12.796 dma/cnxk: not in enabled drivers build config 00:02:12.796 dma/dpaa: not in enabled drivers build config 00:02:12.796 dma/dpaa2: not in enabled drivers build config 00:02:12.796 
dma/hisilicon: not in enabled drivers build config 00:02:12.796 dma/idxd: not in enabled drivers build config 00:02:12.796 dma/ioat: not in enabled drivers build config 00:02:12.796 dma/skeleton: not in enabled drivers build config 00:02:12.796 net/af_packet: not in enabled drivers build config 00:02:12.796 net/af_xdp: not in enabled drivers build config 00:02:12.796 net/ark: not in enabled drivers build config 00:02:12.796 net/atlantic: not in enabled drivers build config 00:02:12.796 net/avp: not in enabled drivers build config 00:02:12.796 net/axgbe: not in enabled drivers build config 00:02:12.796 net/bnx2x: not in enabled drivers build config 00:02:12.796 net/bnxt: not in enabled drivers build config 00:02:12.796 net/bonding: not in enabled drivers build config 00:02:12.796 net/cnxk: not in enabled drivers build config 00:02:12.796 net/cpfl: not in enabled drivers build config 00:02:12.796 net/cxgbe: not in enabled drivers build config 00:02:12.796 net/dpaa: not in enabled drivers build config 00:02:12.796 net/dpaa2: not in enabled drivers build config 00:02:12.796 net/e1000: not in enabled drivers build config 00:02:12.796 net/ena: not in enabled drivers build config 00:02:12.796 net/enetc: not in enabled drivers build config 00:02:12.796 net/enetfec: not in enabled drivers build config 00:02:12.796 net/enic: not in enabled drivers build config 00:02:12.796 net/failsafe: not in enabled drivers build config 00:02:12.796 net/fm10k: not in enabled drivers build config 00:02:12.796 net/gve: not in enabled drivers build config 00:02:12.796 net/hinic: not in enabled drivers build config 00:02:12.796 net/hns3: not in enabled drivers build config 00:02:12.796 net/iavf: not in enabled drivers build config 00:02:12.796 net/ice: not in enabled drivers build config 00:02:12.796 net/idpf: not in enabled drivers build config 00:02:12.796 net/igc: not in enabled drivers build config 00:02:12.796 net/ionic: not in enabled drivers build config 00:02:12.796 net/ipn3ke: not in enabled drivers build config 00:02:12.796 net/ixgbe: not in enabled drivers build config 00:02:12.796 net/mana: not in enabled drivers build config 00:02:12.796 net/memif: not in enabled drivers build config 00:02:12.796 net/mlx4: not in enabled drivers build config 00:02:12.796 net/mlx5: not in enabled drivers build config 00:02:12.796 net/mvneta: not in enabled drivers build config 00:02:12.796 net/mvpp2: not in enabled drivers build config 00:02:12.796 net/netvsc: not in enabled drivers build config 00:02:12.796 net/nfb: not in enabled drivers build config 00:02:12.796 net/nfp: not in enabled drivers build config 00:02:12.796 net/ngbe: not in enabled drivers build config 00:02:12.796 net/null: not in enabled drivers build config 00:02:12.796 net/octeontx: not in enabled drivers build config 00:02:12.796 net/octeon_ep: not in enabled drivers build config 00:02:12.796 net/pcap: not in enabled drivers build config 00:02:12.796 net/pfe: not in enabled drivers build config 00:02:12.796 net/qede: not in enabled drivers build config 00:02:12.796 net/ring: not in enabled drivers build config 00:02:12.796 net/sfc: not in enabled drivers build config 00:02:12.796 net/softnic: not in enabled drivers build config 00:02:12.796 net/tap: not in enabled drivers build config 00:02:12.796 net/thunderx: not in enabled drivers build config 00:02:12.796 net/txgbe: not in enabled drivers build config 00:02:12.796 net/vdev_netvsc: not in enabled drivers build config 00:02:12.796 net/vhost: not in enabled drivers build config 00:02:12.796 net/virtio: 
not in enabled drivers build config 00:02:12.796 net/vmxnet3: not in enabled drivers build config 00:02:12.796 raw/cnxk_bphy: not in enabled drivers build config 00:02:12.796 raw/cnxk_gpio: not in enabled drivers build config 00:02:12.796 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:12.796 raw/ifpga: not in enabled drivers build config 00:02:12.796 raw/ntb: not in enabled drivers build config 00:02:12.796 raw/skeleton: not in enabled drivers build config 00:02:12.796 crypto/armv8: not in enabled drivers build config 00:02:12.796 crypto/bcmfs: not in enabled drivers build config 00:02:12.796 crypto/caam_jr: not in enabled drivers build config 00:02:12.796 crypto/ccp: not in enabled drivers build config 00:02:12.796 crypto/cnxk: not in enabled drivers build config 00:02:12.796 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.796 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.796 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.796 crypto/mlx5: not in enabled drivers build config 00:02:12.796 crypto/mvsam: not in enabled drivers build config 00:02:12.796 crypto/nitrox: not in enabled drivers build config 00:02:12.796 crypto/null: not in enabled drivers build config 00:02:12.796 crypto/octeontx: not in enabled drivers build config 00:02:12.796 crypto/openssl: not in enabled drivers build config 00:02:12.796 crypto/scheduler: not in enabled drivers build config 00:02:12.796 crypto/uadk: not in enabled drivers build config 00:02:12.796 crypto/virtio: not in enabled drivers build config 00:02:12.796 compress/isal: not in enabled drivers build config 00:02:12.796 compress/mlx5: not in enabled drivers build config 00:02:12.796 compress/octeontx: not in enabled drivers build config 00:02:12.796 compress/zlib: not in enabled drivers build config 00:02:12.796 regex/mlx5: not in enabled drivers build config 00:02:12.796 regex/cn9k: not in enabled drivers build config 00:02:12.796 ml/cnxk: not in enabled drivers build config 00:02:12.796 vdpa/ifc: not in enabled drivers build config 00:02:12.796 vdpa/mlx5: not in enabled drivers build config 00:02:12.796 vdpa/nfp: not in enabled drivers build config 00:02:12.796 vdpa/sfc: not in enabled drivers build config 00:02:12.796 event/cnxk: not in enabled drivers build config 00:02:12.796 event/dlb2: not in enabled drivers build config 00:02:12.796 event/dpaa: not in enabled drivers build config 00:02:12.796 event/dpaa2: not in enabled drivers build config 00:02:12.796 event/dsw: not in enabled drivers build config 00:02:12.796 event/opdl: not in enabled drivers build config 00:02:12.796 event/skeleton: not in enabled drivers build config 00:02:12.796 event/sw: not in enabled drivers build config 00:02:12.796 event/octeontx: not in enabled drivers build config 00:02:12.796 baseband/acc: not in enabled drivers build config 00:02:12.796 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:12.796 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:12.796 baseband/la12xx: not in enabled drivers build config 00:02:12.796 baseband/null: not in enabled drivers build config 00:02:12.796 baseband/turbo_sw: not in enabled drivers build config 00:02:12.796 gpu/cuda: not in enabled drivers build config 00:02:12.796 00:02:12.796 00:02:12.796 Build targets in project: 220 00:02:12.796 00:02:12.796 DPDK 23.11.0 00:02:12.796 00:02:12.796 User defined options 00:02:12.796 libdir : lib 00:02:12.796 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:12.797 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:12.797 c_link_args : 00:02:12.797 enable_docs : false 00:02:12.797 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:12.797 enable_kmods : false 00:02:12.797 machine : native 00:02:12.797 tests : false 00:02:12.797 00:02:12.797 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.797 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:12.797 07:08:14 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:12.797 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:12.797 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.797 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.797 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.055 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.055 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.055 [6/710] Linking static target lib/librte_kvargs.a 00:02:13.055 [7/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.055 [8/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.055 [9/710] Linking static target lib/librte_log.a 00:02:13.055 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.313 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.313 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.571 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.571 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.571 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.571 [16/710] Linking target lib/librte_log.so.24.0 00:02:13.571 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.571 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.829 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.829 [20/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:13.829 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.829 [22/710] Linking target lib/librte_kvargs.so.24.0 00:02:13.829 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.829 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.087 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:14.087 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.087 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.087 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.087 [29/710] Linking static target lib/librte_telemetry.a 00:02:14.087 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.345 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.345 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.603 [33/710] Compiling C object 
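(Aside, not taken from the captured output: the "User defined options" block above maps onto an ordinary meson/ninja invocation. The sketch below is reconstructed only from the options recorded in the log; the exact command line assembled by autobuild_common.sh is not shown here, the source directory /home/vagrant/spdk_repo/dpdk is assumed from the prefix and build-tmp paths, and the non-deprecated `meson setup` form is used to avoid the warning printed above.)

# Sketch: configure DPDK out-of-tree into build-tmp with the options recorded in the log
meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
# Build with 10 parallel jobs, matching the "ninja -C ... -j10" step in the log
ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10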
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.603 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.603 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.603 [36/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.603 [37/710] Linking target lib/librte_telemetry.so.24.0 00:02:14.603 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.603 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.603 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.603 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.603 [42/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:14.603 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.862 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.862 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.120 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.120 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.120 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.120 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.379 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.379 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.379 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.379 [53/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.379 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.379 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.638 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.638 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.638 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.638 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.638 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.638 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.638 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.897 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.897 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.897 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.897 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.897 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.897 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.155 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.155 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.414 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.414 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:16.414 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.414 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.414 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.414 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.414 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.414 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.673 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.673 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.673 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.931 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.931 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.931 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.931 [85/710] Linking static target lib/librte_ring.a 00:02:16.931 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.190 [87/710] Linking static target lib/librte_eal.a 00:02:17.190 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.190 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.190 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.449 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.449 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.449 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.449 [94/710] Linking static target lib/librte_mempool.a 00:02:17.449 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.449 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.449 [97/710] Linking static target lib/librte_rcu.a 00:02:17.707 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.707 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.707 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.707 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.977 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.977 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.978 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.978 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.978 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.237 [107/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.237 [108/710] Linking static target lib/librte_net.a 00:02:18.237 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.237 [110/710] Linking static target lib/librte_mbuf.a 00:02:18.496 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.496 [112/710] Linking static target lib/librte_meter.a 00:02:18.496 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.496 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.496 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.496 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.496 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.496 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.755 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.321 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.321 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.580 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.580 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.580 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.580 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.580 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.580 [127/710] Linking static target lib/librte_pci.a 00:02:19.580 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.839 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.839 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.839 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.839 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.839 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.839 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.839 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.839 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.098 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.098 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.098 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.098 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.098 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.358 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.358 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.358 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.358 [145/710] Linking static target lib/librte_cmdline.a 00:02:20.616 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.617 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:20.617 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:20.617 [149/710] Linking static target lib/librte_metrics.a 00:02:20.617 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.875 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.134 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.134 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.134 [154/710] Linking static target 
lib/librte_timer.a 00:02:21.134 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.703 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.703 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:21.703 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:21.961 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:21.961 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:22.529 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.529 [162/710] Linking static target lib/librte_ethdev.a 00:02:22.529 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.529 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:22.529 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:22.529 [166/710] Linking static target lib/librte_bitratestats.a 00:02:22.529 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.788 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:22.788 [169/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.788 [170/710] Linking static target lib/librte_bbdev.a 00:02:22.788 [171/710] Linking target lib/librte_eal.so.24.0 00:02:22.788 [172/710] Linking static target lib/librte_hash.a 00:02:22.788 [173/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.788 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:22.788 [175/710] Linking target lib/librte_ring.so.24.0 00:02:23.047 [176/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:23.047 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:23.047 [178/710] Linking target lib/librte_meter.so.24.0 00:02:23.047 [179/710] Linking target lib/librte_rcu.so.24.0 00:02:23.047 [180/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:23.047 [181/710] Linking target lib/librte_mempool.so.24.0 00:02:23.047 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:23.047 [183/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:23.306 [184/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:23.306 [185/710] Linking target lib/librte_pci.so.24.0 00:02:23.306 [186/710] Linking target lib/librte_mbuf.so.24.0 00:02:23.306 [187/710] Linking target lib/librte_timer.so.24.0 00:02:23.306 [188/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.306 [189/710] Linking static target lib/acl/libavx2_tmp.a 00:02:23.306 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.306 [191/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:23.306 [192/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:23.306 [193/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:23.306 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:23.306 [195/710] Linking static target lib/acl/libavx512_tmp.a 00:02:23.306 [196/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:23.306 [197/710] Linking target 
lib/librte_bbdev.so.24.0 00:02:23.306 [198/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:23.306 [199/710] Linking target lib/librte_net.so.24.0 00:02:23.564 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:23.564 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:23.564 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:23.564 [203/710] Linking target lib/librte_hash.so.24.0 00:02:23.564 [204/710] Linking static target lib/librte_acl.a 00:02:23.564 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.564 [206/710] Linking static target lib/librte_cfgfile.a 00:02:23.823 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:23.823 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:23.823 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.823 [210/710] Linking target lib/librte_acl.so.24.0 00:02:24.082 [211/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.082 [212/710] Linking target lib/librte_cfgfile.so.24.0 00:02:24.082 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:24.082 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:24.082 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:24.082 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:24.341 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.341 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:24.599 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.599 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:24.599 [221/710] Linking static target lib/librte_bpf.a 00:02:24.599 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.599 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.599 [224/710] Linking static target lib/librte_compressdev.a 00:02:24.859 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.859 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.859 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:25.118 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:25.118 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:25.118 [230/710] Linking static target lib/librte_distributor.a 00:02:25.118 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:25.118 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.118 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:25.377 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.377 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.377 [236/710] Linking static target lib/librte_dmadev.a 00:02:25.377 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:25.636 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:25.636 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.636 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:25.894 [241/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:25.894 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:26.152 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:26.152 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:26.152 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:26.152 [246/710] Linking static target lib/librte_efd.a 00:02:26.416 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.416 [248/710] Linking static target lib/librte_cryptodev.a 00:02:26.416 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:26.416 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.416 [251/710] Linking target lib/librte_efd.so.24.0 00:02:26.700 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.971 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.971 [254/710] Linking target lib/librte_ethdev.so.24.0 00:02:26.971 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:26.971 [256/710] Linking static target lib/librte_dispatcher.a 00:02:26.971 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:26.971 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:26.971 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:27.229 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:27.229 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:27.229 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:02:27.229 [263/710] Linking static target lib/librte_gpudev.a 00:02:27.229 [264/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:27.229 [265/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:27.229 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:27.229 [267/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:27.229 [268/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.488 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.489 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:27.489 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:27.489 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:27.747 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:27.747 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:28.005 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.005 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:28.005 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:28.005 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:28.005 [279/710] Linking static target lib/librte_eventdev.a 00:02:28.005 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:28.005 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:28.006 [282/710] Linking static target lib/librte_gro.a 00:02:28.006 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:28.006 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:28.264 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:28.264 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.264 [287/710] Linking target lib/librte_gro.so.24.0 00:02:28.264 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:28.523 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:28.523 [290/710] Linking static target lib/librte_gso.a 00:02:28.523 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:28.523 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.782 [293/710] Linking target lib/librte_gso.so.24.0 00:02:28.782 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:28.782 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:28.782 [296/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:28.782 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:28.782 [298/710] Linking static target lib/librte_jobstats.a 00:02:28.782 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:29.040 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:29.040 [301/710] Linking static target lib/librte_ip_frag.a 00:02:29.040 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:29.040 [303/710] Linking static target lib/librte_latencystats.a 00:02:29.040 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.299 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:29.299 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.299 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:02:29.299 [308/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.299 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:29.299 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:29.299 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:29.299 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:29.299 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:29.558 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:29.558 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.558 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:29.558 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.817 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.817 [319/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:29.817 [320/710] Linking static target lib/librte_lpm.a 00:02:30.076 [321/710] Linking target lib/librte_eventdev.so.24.0 00:02:30.076 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:30.076 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:30.076 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:30.076 [325/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.076 [326/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.076 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.335 [328/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:30.335 [329/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.335 [330/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:30.335 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.335 [332/710] Linking static target lib/librte_pcapng.a 00:02:30.335 [333/710] Linking target lib/librte_lpm.so.24.0 00:02:30.335 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:30.594 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.594 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:30.594 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:30.594 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.594 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.854 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:30.854 [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.854 [342/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.854 [343/710] Linking static target lib/librte_member.a 00:02:30.854 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.854 [345/710] Linking static target lib/librte_regexdev.a 00:02:30.854 [346/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.854 [347/710] Linking static target lib/librte_power.a 00:02:30.854 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:30.854 [349/710] Linking static target lib/librte_rawdev.a 00:02:31.113 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:31.114 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:31.114 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.373 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:31.373 [354/710] Linking target lib/librte_member.so.24.0 00:02:31.373 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:31.373 [356/710] Linking static target lib/librte_mldev.a 00:02:31.373 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:31.373 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.373 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:31.633 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.633 
[361/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:31.633 [362/710] Linking target lib/librte_power.so.24.0 00:02:31.633 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.633 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:31.892 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:31.892 [366/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:31.893 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.893 [368/710] Linking static target lib/librte_reorder.a 00:02:31.893 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:31.893 [370/710] Linking static target lib/librte_rib.a 00:02:31.893 [371/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:32.152 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:32.152 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:32.152 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:32.152 [375/710] Linking static target lib/librte_stack.a 00:02:32.152 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.152 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:32.152 [378/710] Linking static target lib/librte_security.a 00:02:32.152 [379/710] Linking target lib/librte_reorder.so.24.0 00:02:32.412 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.412 [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:32.412 [382/710] Linking target lib/librte_rib.so.24.0 00:02:32.412 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.412 [384/710] Linking target lib/librte_stack.so.24.0 00:02:32.412 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:32.671 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.671 [387/710] Linking target lib/librte_mldev.so.24.0 00:02:32.671 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.671 [389/710] Linking target lib/librte_security.so.24.0 00:02:32.671 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.671 [391/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:32.671 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.931 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.931 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:32.931 [395/710] Linking static target lib/librte_sched.a 00:02:33.190 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.190 [397/710] Linking target lib/librte_sched.so.24.0 00:02:33.449 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:33.449 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:33.449 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.709 [401/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:33.709 [402/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.968 [403/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.968 [404/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:33.968 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:34.227 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:34.227 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:34.487 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:34.487 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:34.487 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:34.487 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:34.487 [412/710] Linking static target lib/librte_ipsec.a 00:02:34.487 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:34.746 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.746 [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:34.746 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:35.006 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:35.006 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:35.006 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:35.006 [420/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:35.006 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:35.006 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:35.006 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:35.945 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:35.945 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:35.945 [426/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:35.945 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:35.945 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:35.945 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:35.945 [430/710] Linking static target lib/librte_fib.a 00:02:35.945 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.945 [432/710] Linking static target lib/librte_pdcp.a 00:02:36.204 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.204 [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.204 [435/710] Linking target lib/librte_fib.so.24.0 00:02:36.204 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:36.464 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:36.723 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:36.723 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:36.723 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:36.723 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:36.982 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:36.982 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:36.982 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:37.241 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:37.241 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:37.241 [447/710] Linking static target lib/librte_port.a 00:02:37.501 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:37.501 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:37.501 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:37.760 [451/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.760 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:37.760 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.760 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:37.760 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:37.760 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:37.760 [457/710] Linking target lib/librte_port.so.24.0 00:02:37.760 [458/710] Linking static target lib/librte_pdump.a 00:02:38.019 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:38.019 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.019 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:38.277 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:38.535 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.535 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:38.535 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.535 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:38.794 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:38.794 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:39.054 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:39.054 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:39.054 [471/710] Linking static target lib/librte_table.a 00:02:39.054 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:39.054 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:39.622 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.622 [475/710] Linking target lib/librte_table.so.24.0 00:02:39.622 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:39.622 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:39.622 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:39.881 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:39.881 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:40.141 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:40.400 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:40.400 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:40.400 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:40.400 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:40.660 [486/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:40.920 [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:41.179 [488/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:41.179 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:41.179 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:41.179 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:41.179 [492/710] Linking static target lib/librte_graph.a 00:02:41.179 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:41.754 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:41.754 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.754 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:41.754 [497/710] Linking target lib/librte_graph.so.24.0 00:02:41.754 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:42.055 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:42.055 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:42.318 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:42.318 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:42.318 [503/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:42.318 [504/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.318 [505/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:42.318 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:42.578 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:42.838 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:42.838 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.838 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.097 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.097 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:43.097 [513/710] Linking static target lib/librte_node.a 00:02:43.097 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:43.097 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.356 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.357 [517/710] Linking target lib/librte_node.so.24.0 00:02:43.357 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:43.357 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:43.357 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:43.616 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:43.616 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.616 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.616 [524/710] Linking static target drivers/librte_bus_pci.a 00:02:43.616 [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:43.616 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.616 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.616 [528/710] Linking static target drivers/librte_bus_vdev.a 00:02:43.876 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:43.876 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:43.876 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.876 [532/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.876 [533/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:44.135 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:44.135 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:44.135 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.135 [537/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.135 [538/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:44.135 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.135 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:44.395 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.395 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.395 [543/710] Linking static target drivers/librte_mempool_ring.a 00:02:44.395 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.395 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:44.654 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:44.654 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:44.913 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:45.173 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:45.173 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:45.173 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:45.741 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:46.000 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:46.000 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:46.000 [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:46.000 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:46.000 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:46.567 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:46.567 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:46.567 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:46.825 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:46.825 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:47.083 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:47.342 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:47.342 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:47.601 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:47.860 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:47.860 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:47.860 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:47.860 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:48.118 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:48.118 [572/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.118 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:48.118 [574/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:48.118 [575/710] Linking static target lib/librte_vhost.a 00:02:48.686 [576/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:48.686 [577/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:48.686 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:48.686 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:48.686 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:48.686 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:48.686 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:49.254 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:49.254 [584/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.254 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.254 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:49.254 [587/710] Linking static target drivers/librte_net_i40e.a 00:02:49.254 [588/710] Linking target lib/librte_vhost.so.24.0 00:02:49.254 [589/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.254 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:49.254 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:49.254 [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:49.254 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:49.254 [594/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:49.822 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.822 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:49.822 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:49.822 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:49.822 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:50.081 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:50.339 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:50.339 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:50.339 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:50.598 [604/710] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:50.598 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:50.598 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:50.598 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:51.166 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:51.166 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:51.166 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:51.166 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:51.425 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:51.425 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:51.425 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:51.425 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:51.425 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:51.425 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:51.684 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:51.942 [619/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:51.942 [620/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:52.201 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:52.201 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:52.201 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:52.460 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.460 [625/710] Linking static target lib/librte_pipeline.a 00:02:53.028 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:53.028 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:53.028 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:53.287 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:53.287 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:53.287 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:53.287 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:53.546 [633/710] Linking target app/dpdk-dumpcap 00:02:53.546 [634/710] Linking target app/dpdk-pdump 00:02:53.546 [635/710] Linking target app/dpdk-graph 00:02:53.546 [636/710] Linking target app/dpdk-proc-info 00:02:53.805 [637/710] Linking target app/dpdk-test-acl 00:02:53.805 [638/710] Linking target app/dpdk-test-cmdline 00:02:53.805 [639/710] Linking target app/dpdk-test-compress-perf 00:02:54.064 [640/710] Linking target app/dpdk-test-crypto-perf 00:02:54.064 [641/710] Linking target app/dpdk-test-fib 00:02:54.064 [642/710] Linking target app/dpdk-test-dma-perf 00:02:54.064 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:54.064 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:54.322 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:54.323 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:54.581 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:54.581 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:54.581 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:54.581 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:54.840 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:54.840 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:54.840 [653/710] Linking target app/dpdk-test-gpudev 00:02:54.840 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:55.099 [655/710] Linking target app/dpdk-test-eventdev 00:02:55.099 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:55.099 [657/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.099 [658/710] Linking target lib/librte_pipeline.so.24.0 00:02:55.099 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:55.358 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:55.358 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:55.358 [662/710] Linking target app/dpdk-test-flow-perf 00:02:55.358 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:55.358 [664/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:55.617 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.617 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.876 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.876 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.876 [669/710] Linking target app/dpdk-test-bbdev 00:02:55.876 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.876 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:56.135 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:56.135 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:56.394 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.394 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:56.394 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.652 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:56.653 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.911 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.911 [680/710] Linking target app/dpdk-test-pipeline 00:02:56.911 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:56.911 [682/710] Linking target app/dpdk-test-mldev 00:02:57.171 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.430 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.430 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.689 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.689 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.689 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:57.960 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:57.960 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:58.218 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:58.218 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:58.218 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:58.481 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.760 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:59.032 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:59.032 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:59.032 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:59.291 [699/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:59.291 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:59.291 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:59.291 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:59.550 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.550 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:59.550 [705/710] Linking target app/dpdk-test-regex 00:02:59.550 [706/710] Linking target app/dpdk-test-sad 00:02:59.809 [707/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.068 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:00.327 [709/710] Linking target app/dpdk-testpmd 00:03:00.586 [710/710] Linking target app/dpdk-test-security-perf 00:03:00.586 07:09:02 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:00.586 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.586 [0/1] Installing files. 
00:03:00.848 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.848 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.850 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.850 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.851 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.852 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.853 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.853 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.853 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:01.113 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:01.113 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.113 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.375 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.375 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.375 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.375 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:01.375 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.375 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.378 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.379 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.379 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:01.379 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:01.379 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:01.379 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:01.379 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:01.379 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:01.379 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:01.379 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:01.379 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:01.379 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:01.379 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:01.379 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:01.379 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:01.379 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:01.379 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:01.379 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:01.379 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:01.379 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:01.379 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:01.379 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:01.379 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:01.379 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:01.379 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:01.379 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:01.379 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:01.379 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:01.379 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:01.379 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:01.379 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:01.379 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:01.379 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:01.379 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:01.379 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:01.379 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:01.379 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:01.379 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:01.379 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:01.379 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:01.379 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:01.379 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:01.379 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:01.379 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:01.379 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:01.379 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:01.379 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:01.379 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:01.379 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:01.379 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:01.379 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:01.379 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:01.379 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:01.379 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:01.379 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:01.379 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:01.379 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:01.379 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:01.379 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:01.379 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:01.379 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:01.379 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:01.379 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:01.379 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:01.379 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:01.379 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:01.379 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:01.379 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:01.379 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:01.379 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:01.379 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:01.379 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:01.379 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:01.379 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:01.379 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:01.379 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:01.379 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:01.379 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:01.379 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:01.379 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:01.379 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:01.379 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:01.379 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:01.379 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:01.379 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:01.379 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:01.379 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:01.379 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:01.379 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:01.379 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:01.379 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:01.379 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:01.379 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:01.379 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:01.379 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:01.379 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:01.379 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:01.379 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:01.380 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:01.380 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:01.380 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:01.380 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:01.380 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:01.380 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:01.380 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:01.380 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:01.380 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:01.380 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:01.380 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:01.380 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:01.380 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:01.380 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:01.380 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:01.380 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:01.380 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:01.380 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:01.380 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:01.380 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:01.380 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:01.380 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:01.380 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:01.380 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:01.380 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:01.380 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:01.380 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:01.380 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:01.380 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:01.380 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:01.380 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:01.380 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:01.380 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:01.380 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:01.380 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:01.380 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:01.380 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:01.639 07:09:03 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:01.639 07:09:03 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:01.639 07:09:03 -- common/autobuild_common.sh@203 -- $ cat 00:03:01.639 07:09:03 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:01.639 00:03:01.639 real 0m55.971s 00:03:01.639 user 6m38.406s 00:03:01.639 sys 1m7.474s 00:03:01.639 07:09:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:01.639 ************************************ 00:03:01.639 END TEST build_native_dpdk 00:03:01.639 ************************************ 00:03:01.639 07:09:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.639 07:09:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:01.639 07:09:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:01.639 07:09:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:01.639 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.899 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.899 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.899 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:02.157 Using 'verbs' RDMA provider 00:03:17.613 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:32.497 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:32.497 go version go1.21.1 linux/amd64 00:03:32.497 Creating mk/config.mk...done. 00:03:32.497 Creating mk/cc.flags.mk...done. 00:03:32.497 Type 'make' to build. 00:03:32.497 07:09:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:32.497 07:09:32 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:32.497 07:09:32 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:32.497 07:09:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.497 ************************************ 00:03:32.497 START TEST make 00:03:32.497 ************************************ 00:03:32.497 07:09:32 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:32.497 make[1]: Nothing to be done for 'all'. 
00:03:54.430 CC lib/ut/ut.o 00:03:54.431 CC lib/ut_mock/mock.o 00:03:54.431 CC lib/log/log.o 00:03:54.431 CC lib/log/log_flags.o 00:03:54.431 CC lib/log/log_deprecated.o 00:03:54.431 LIB libspdk_ut_mock.a 00:03:54.431 LIB libspdk_ut.a 00:03:54.431 SO libspdk_ut_mock.so.5.0 00:03:54.431 LIB libspdk_log.a 00:03:54.431 SO libspdk_ut.so.1.0 00:03:54.431 SO libspdk_log.so.6.1 00:03:54.431 SYMLINK libspdk_ut_mock.so 00:03:54.431 SYMLINK libspdk_ut.so 00:03:54.431 SYMLINK libspdk_log.so 00:03:54.431 CC lib/util/base64.o 00:03:54.431 CC lib/util/bit_array.o 00:03:54.431 CC lib/util/cpuset.o 00:03:54.431 CC lib/ioat/ioat.o 00:03:54.431 CC lib/util/crc16.o 00:03:54.431 CC lib/util/crc32.o 00:03:54.431 CC lib/util/crc32c.o 00:03:54.431 CXX lib/trace_parser/trace.o 00:03:54.431 CC lib/dma/dma.o 00:03:54.431 CC lib/vfio_user/host/vfio_user_pci.o 00:03:54.431 CC lib/vfio_user/host/vfio_user.o 00:03:54.431 CC lib/util/crc32_ieee.o 00:03:54.431 CC lib/util/crc64.o 00:03:54.431 CC lib/util/dif.o 00:03:54.431 LIB libspdk_dma.a 00:03:54.431 CC lib/util/fd.o 00:03:54.431 SO libspdk_dma.so.3.0 00:03:54.431 CC lib/util/file.o 00:03:54.431 CC lib/util/hexlify.o 00:03:54.431 SYMLINK libspdk_dma.so 00:03:54.431 LIB libspdk_ioat.a 00:03:54.431 CC lib/util/iov.o 00:03:54.431 CC lib/util/math.o 00:03:54.431 SO libspdk_ioat.so.6.0 00:03:54.431 LIB libspdk_vfio_user.a 00:03:54.431 CC lib/util/pipe.o 00:03:54.431 SYMLINK libspdk_ioat.so 00:03:54.431 SO libspdk_vfio_user.so.4.0 00:03:54.431 CC lib/util/strerror_tls.o 00:03:54.431 CC lib/util/string.o 00:03:54.431 CC lib/util/uuid.o 00:03:54.431 CC lib/util/fd_group.o 00:03:54.431 SYMLINK libspdk_vfio_user.so 00:03:54.431 CC lib/util/xor.o 00:03:54.431 CC lib/util/zipf.o 00:03:54.431 LIB libspdk_util.a 00:03:54.431 SO libspdk_util.so.8.0 00:03:54.431 SYMLINK libspdk_util.so 00:03:54.431 LIB libspdk_trace_parser.a 00:03:54.431 SO libspdk_trace_parser.so.4.0 00:03:54.431 CC lib/rdma/common.o 00:03:54.431 CC lib/json/json_parse.o 00:03:54.431 CC lib/json/json_util.o 00:03:54.431 CC lib/rdma/rdma_verbs.o 00:03:54.431 CC lib/conf/conf.o 00:03:54.431 CC lib/idxd/idxd.o 00:03:54.431 CC lib/vmd/vmd.o 00:03:54.431 CC lib/json/json_write.o 00:03:54.431 CC lib/env_dpdk/env.o 00:03:54.431 SYMLINK libspdk_trace_parser.so 00:03:54.431 CC lib/vmd/led.o 00:03:54.431 CC lib/env_dpdk/memory.o 00:03:54.431 CC lib/env_dpdk/pci.o 00:03:54.431 LIB libspdk_conf.a 00:03:54.431 CC lib/idxd/idxd_user.o 00:03:54.431 CC lib/idxd/idxd_kernel.o 00:03:54.431 SO libspdk_conf.so.5.0 00:03:54.431 SYMLINK libspdk_conf.so 00:03:54.431 CC lib/env_dpdk/init.o 00:03:54.431 LIB libspdk_json.a 00:03:54.431 LIB libspdk_rdma.a 00:03:54.431 SO libspdk_json.so.5.1 00:03:54.431 SO libspdk_rdma.so.5.0 00:03:54.431 SYMLINK libspdk_rdma.so 00:03:54.431 CC lib/env_dpdk/threads.o 00:03:54.431 CC lib/env_dpdk/pci_ioat.o 00:03:54.431 SYMLINK libspdk_json.so 00:03:54.431 CC lib/env_dpdk/pci_virtio.o 00:03:54.431 CC lib/env_dpdk/pci_vmd.o 00:03:54.431 LIB libspdk_idxd.a 00:03:54.431 CC lib/env_dpdk/pci_idxd.o 00:03:54.431 CC lib/env_dpdk/pci_event.o 00:03:54.431 SO libspdk_idxd.so.11.0 00:03:54.431 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.431 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.431 SYMLINK libspdk_idxd.so 00:03:54.431 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.431 LIB libspdk_vmd.a 00:03:54.431 CC lib/env_dpdk/sigbus_handler.o 00:03:54.431 SO libspdk_vmd.so.5.0 00:03:54.431 CC lib/env_dpdk/pci_dpdk.o 00:03:54.431 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.431 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.431 SYMLINK 
libspdk_vmd.so 00:03:54.431 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.431 LIB libspdk_jsonrpc.a 00:03:54.431 SO libspdk_jsonrpc.so.5.1 00:03:54.431 SYMLINK libspdk_jsonrpc.so 00:03:54.431 CC lib/rpc/rpc.o 00:03:54.694 LIB libspdk_env_dpdk.a 00:03:54.694 LIB libspdk_rpc.a 00:03:54.694 SO libspdk_env_dpdk.so.13.0 00:03:54.694 SO libspdk_rpc.so.5.0 00:03:54.694 SYMLINK libspdk_rpc.so 00:03:54.979 SYMLINK libspdk_env_dpdk.so 00:03:54.979 CC lib/sock/sock.o 00:03:54.979 CC lib/sock/sock_rpc.o 00:03:54.979 CC lib/trace/trace_rpc.o 00:03:54.979 CC lib/trace/trace.o 00:03:54.979 CC lib/trace/trace_flags.o 00:03:54.979 CC lib/notify/notify.o 00:03:54.979 CC lib/notify/notify_rpc.o 00:03:54.979 LIB libspdk_notify.a 00:03:55.250 SO libspdk_notify.so.5.0 00:03:55.250 LIB libspdk_trace.a 00:03:55.250 SYMLINK libspdk_notify.so 00:03:55.250 SO libspdk_trace.so.9.0 00:03:55.250 SYMLINK libspdk_trace.so 00:03:55.250 LIB libspdk_sock.a 00:03:55.250 SO libspdk_sock.so.8.0 00:03:55.250 SYMLINK libspdk_sock.so 00:03:55.508 CC lib/thread/iobuf.o 00:03:55.508 CC lib/thread/thread.o 00:03:55.508 CC lib/nvme/nvme_ctrlr.o 00:03:55.508 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:55.508 CC lib/nvme/nvme_fabric.o 00:03:55.508 CC lib/nvme/nvme_ns_cmd.o 00:03:55.508 CC lib/nvme/nvme_ns.o 00:03:55.508 CC lib/nvme/nvme_pcie.o 00:03:55.508 CC lib/nvme/nvme_pcie_common.o 00:03:55.508 CC lib/nvme/nvme_qpair.o 00:03:55.767 CC lib/nvme/nvme.o 00:03:56.335 CC lib/nvme/nvme_quirks.o 00:03:56.335 CC lib/nvme/nvme_transport.o 00:03:56.335 CC lib/nvme/nvme_discovery.o 00:03:56.335 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:56.335 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:56.335 CC lib/nvme/nvme_tcp.o 00:03:56.594 CC lib/nvme/nvme_opal.o 00:03:56.594 CC lib/nvme/nvme_io_msg.o 00:03:56.852 CC lib/nvme/nvme_poll_group.o 00:03:56.852 CC lib/nvme/nvme_zns.o 00:03:56.852 CC lib/nvme/nvme_cuse.o 00:03:56.852 CC lib/nvme/nvme_vfio_user.o 00:03:56.853 LIB libspdk_thread.a 00:03:56.853 SO libspdk_thread.so.9.0 00:03:56.853 CC lib/nvme/nvme_rdma.o 00:03:57.111 SYMLINK libspdk_thread.so 00:03:57.111 CC lib/accel/accel.o 00:03:57.111 CC lib/blob/blobstore.o 00:03:57.111 CC lib/accel/accel_rpc.o 00:03:57.370 CC lib/accel/accel_sw.o 00:03:57.370 CC lib/init/json_config.o 00:03:57.370 CC lib/virtio/virtio.o 00:03:57.370 CC lib/virtio/virtio_vhost_user.o 00:03:57.629 CC lib/virtio/virtio_vfio_user.o 00:03:57.629 CC lib/virtio/virtio_pci.o 00:03:57.629 CC lib/blob/request.o 00:03:57.629 CC lib/init/subsystem.o 00:03:57.629 CC lib/init/subsystem_rpc.o 00:03:57.629 CC lib/init/rpc.o 00:03:57.629 CC lib/blob/zeroes.o 00:03:57.629 CC lib/blob/blob_bs_dev.o 00:03:57.888 LIB libspdk_virtio.a 00:03:57.888 LIB libspdk_init.a 00:03:57.888 SO libspdk_virtio.so.6.0 00:03:57.888 SO libspdk_init.so.4.0 00:03:57.888 SYMLINK libspdk_virtio.so 00:03:57.888 SYMLINK libspdk_init.so 00:03:57.888 LIB libspdk_accel.a 00:03:58.146 SO libspdk_accel.so.14.0 00:03:58.146 SYMLINK libspdk_accel.so 00:03:58.146 CC lib/event/app.o 00:03:58.146 CC lib/event/reactor.o 00:03:58.146 CC lib/event/scheduler_static.o 00:03:58.146 CC lib/event/app_rpc.o 00:03:58.146 CC lib/event/log_rpc.o 00:03:58.146 LIB libspdk_nvme.a 00:03:58.146 CC lib/bdev/bdev.o 00:03:58.146 CC lib/bdev/bdev_zone.o 00:03:58.146 CC lib/bdev/bdev_rpc.o 00:03:58.146 CC lib/bdev/part.o 00:03:58.146 CC lib/bdev/scsi_nvme.o 00:03:58.405 SO libspdk_nvme.so.12.0 00:03:58.405 LIB libspdk_event.a 00:03:58.664 SO libspdk_event.so.12.0 00:03:58.664 SYMLINK libspdk_nvme.so 00:03:58.664 SYMLINK libspdk_event.so 00:03:59.598 LIB libspdk_blob.a 
00:03:59.598 SO libspdk_blob.so.10.1 00:03:59.598 SYMLINK libspdk_blob.so 00:03:59.857 CC lib/lvol/lvol.o 00:03:59.857 CC lib/blobfs/blobfs.o 00:03:59.857 CC lib/blobfs/tree.o 00:04:00.424 LIB libspdk_bdev.a 00:04:00.683 SO libspdk_bdev.so.14.0 00:04:00.683 LIB libspdk_blobfs.a 00:04:00.683 LIB libspdk_lvol.a 00:04:00.683 SYMLINK libspdk_bdev.so 00:04:00.683 SO libspdk_blobfs.so.9.0 00:04:00.683 SO libspdk_lvol.so.9.1 00:04:00.683 SYMLINK libspdk_blobfs.so 00:04:00.683 SYMLINK libspdk_lvol.so 00:04:00.683 CC lib/nbd/nbd.o 00:04:00.683 CC lib/nbd/nbd_rpc.o 00:04:00.683 CC lib/ublk/ublk.o 00:04:00.683 CC lib/ublk/ublk_rpc.o 00:04:00.683 CC lib/scsi/dev.o 00:04:00.683 CC lib/scsi/lun.o 00:04:00.683 CC lib/scsi/port.o 00:04:00.683 CC lib/scsi/scsi.o 00:04:00.683 CC lib/nvmf/ctrlr.o 00:04:00.683 CC lib/ftl/ftl_core.o 00:04:00.942 CC lib/nvmf/ctrlr_discovery.o 00:04:00.942 CC lib/nvmf/ctrlr_bdev.o 00:04:00.942 CC lib/scsi/scsi_bdev.o 00:04:00.942 CC lib/scsi/scsi_pr.o 00:04:00.942 CC lib/scsi/scsi_rpc.o 00:04:00.942 CC lib/ftl/ftl_init.o 00:04:01.201 LIB libspdk_nbd.a 00:04:01.201 SO libspdk_nbd.so.6.0 00:04:01.201 CC lib/ftl/ftl_layout.o 00:04:01.201 CC lib/ftl/ftl_debug.o 00:04:01.201 SYMLINK libspdk_nbd.so 00:04:01.201 CC lib/scsi/task.o 00:04:01.201 CC lib/nvmf/subsystem.o 00:04:01.201 CC lib/nvmf/nvmf.o 00:04:01.460 LIB libspdk_ublk.a 00:04:01.460 CC lib/nvmf/nvmf_rpc.o 00:04:01.460 CC lib/nvmf/transport.o 00:04:01.460 SO libspdk_ublk.so.2.0 00:04:01.460 LIB libspdk_scsi.a 00:04:01.460 SYMLINK libspdk_ublk.so 00:04:01.460 CC lib/nvmf/tcp.o 00:04:01.460 CC lib/nvmf/rdma.o 00:04:01.460 CC lib/ftl/ftl_io.o 00:04:01.460 SO libspdk_scsi.so.8.0 00:04:01.460 CC lib/ftl/ftl_sb.o 00:04:01.719 SYMLINK libspdk_scsi.so 00:04:01.719 CC lib/ftl/ftl_l2p.o 00:04:01.719 CC lib/ftl/ftl_l2p_flat.o 00:04:01.719 CC lib/ftl/ftl_nv_cache.o 00:04:01.719 CC lib/ftl/ftl_band.o 00:04:01.978 CC lib/ftl/ftl_band_ops.o 00:04:01.978 CC lib/ftl/ftl_writer.o 00:04:01.978 CC lib/ftl/ftl_rq.o 00:04:02.237 CC lib/ftl/ftl_reloc.o 00:04:02.237 CC lib/ftl/ftl_l2p_cache.o 00:04:02.237 CC lib/ftl/ftl_p2l.o 00:04:02.237 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.237 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.237 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.237 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.495 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.495 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.495 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.495 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.495 CC lib/iscsi/conn.o 00:04:02.495 CC lib/vhost/vhost.o 00:04:02.495 CC lib/vhost/vhost_rpc.o 00:04:02.754 CC lib/iscsi/init_grp.o 00:04:02.754 CC lib/iscsi/iscsi.o 00:04:02.754 CC lib/iscsi/md5.o 00:04:02.754 CC lib/iscsi/param.o 00:04:02.754 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.754 CC lib/iscsi/portal_grp.o 00:04:02.754 CC lib/iscsi/tgt_node.o 00:04:02.754 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.013 CC lib/vhost/vhost_scsi.o 00:04:03.013 CC lib/vhost/vhost_blk.o 00:04:03.013 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.013 CC lib/vhost/rte_vhost_user.o 00:04:03.013 CC lib/iscsi/iscsi_subsystem.o 00:04:03.272 CC lib/iscsi/iscsi_rpc.o 00:04:03.272 CC lib/iscsi/task.o 00:04:03.272 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.272 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.272 LIB libspdk_nvmf.a 00:04:03.272 SO libspdk_nvmf.so.17.0 00:04:03.272 CC lib/ftl/utils/ftl_conf.o 00:04:03.531 CC lib/ftl/utils/ftl_md.o 00:04:03.531 CC lib/ftl/utils/ftl_mempool.o 00:04:03.531 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.531 SYMLINK libspdk_nvmf.so 00:04:03.531 CC 
lib/ftl/utils/ftl_property.o 00:04:03.531 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.531 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.531 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.531 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.789 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.790 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.790 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.790 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.790 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.790 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.790 CC lib/ftl/base/ftl_base_dev.o 00:04:03.790 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.790 LIB libspdk_iscsi.a 00:04:03.790 CC lib/ftl/ftl_trace.o 00:04:04.049 SO libspdk_iscsi.so.7.0 00:04:04.049 LIB libspdk_vhost.a 00:04:04.049 SYMLINK libspdk_iscsi.so 00:04:04.049 SO libspdk_vhost.so.7.1 00:04:04.049 LIB libspdk_ftl.a 00:04:04.307 SYMLINK libspdk_vhost.so 00:04:04.308 SO libspdk_ftl.so.8.0 00:04:04.566 SYMLINK libspdk_ftl.so 00:04:04.825 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.825 CC module/blob/bdev/blob_bdev.o 00:04:04.825 CC module/sock/posix/posix.o 00:04:04.825 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.825 CC module/accel/ioat/accel_ioat.o 00:04:04.825 CC module/accel/dsa/accel_dsa.o 00:04:04.825 CC module/accel/error/accel_error.o 00:04:04.825 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:04.825 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.825 CC module/accel/iaa/accel_iaa.o 00:04:04.825 LIB libspdk_env_dpdk_rpc.a 00:04:04.825 SO libspdk_env_dpdk_rpc.so.5.0 00:04:05.084 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.084 CC module/accel/error/accel_error_rpc.o 00:04:05.084 LIB libspdk_scheduler_gscheduler.a 00:04:05.084 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.084 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:05.084 SO libspdk_scheduler_gscheduler.so.3.0 00:04:05.084 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.084 LIB libspdk_scheduler_dynamic.a 00:04:05.084 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.084 SYMLINK libspdk_scheduler_gscheduler.so 00:04:05.084 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.084 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:05.084 SO libspdk_scheduler_dynamic.so.3.0 00:04:05.084 LIB libspdk_blob_bdev.a 00:04:05.084 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.084 SO libspdk_blob_bdev.so.10.1 00:04:05.084 LIB libspdk_accel_error.a 00:04:05.084 SYMLINK libspdk_blob_bdev.so 00:04:05.084 SO libspdk_accel_error.so.1.0 00:04:05.084 LIB libspdk_accel_ioat.a 00:04:05.084 LIB libspdk_accel_dsa.a 00:04:05.084 LIB libspdk_accel_iaa.a 00:04:05.084 SO libspdk_accel_ioat.so.5.0 00:04:05.084 SO libspdk_accel_dsa.so.4.0 00:04:05.084 SYMLINK libspdk_accel_error.so 00:04:05.084 SO libspdk_accel_iaa.so.2.0 00:04:05.343 SYMLINK libspdk_accel_dsa.so 00:04:05.343 SYMLINK libspdk_accel_ioat.so 00:04:05.343 SYMLINK libspdk_accel_iaa.so 00:04:05.343 CC module/bdev/lvol/vbdev_lvol.o 00:04:05.343 CC module/bdev/gpt/gpt.o 00:04:05.343 CC module/bdev/error/vbdev_error.o 00:04:05.343 CC module/bdev/delay/vbdev_delay.o 00:04:05.343 CC module/bdev/malloc/bdev_malloc.o 00:04:05.343 CC module/blobfs/bdev/blobfs_bdev.o 00:04:05.343 CC module/bdev/nvme/bdev_nvme.o 00:04:05.343 CC module/bdev/null/bdev_null.o 00:04:05.343 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.602 LIB libspdk_sock_posix.a 00:04:05.602 CC module/bdev/gpt/vbdev_gpt.o 00:04:05.602 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:05.602 SO libspdk_sock_posix.so.5.0 00:04:05.602 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.602 SYMLINK libspdk_sock_posix.so 
00:04:05.602 CC module/bdev/null/bdev_null_rpc.o 00:04:05.602 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.602 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.602 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.602 LIB libspdk_blobfs_bdev.a 00:04:05.602 SO libspdk_blobfs_bdev.so.5.0 00:04:05.602 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.602 LIB libspdk_bdev_error.a 00:04:05.861 LIB libspdk_bdev_gpt.a 00:04:05.861 SYMLINK libspdk_blobfs_bdev.so 00:04:05.861 SO libspdk_bdev_error.so.5.0 00:04:05.861 LIB libspdk_bdev_null.a 00:04:05.861 LIB libspdk_bdev_delay.a 00:04:05.861 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.861 CC module/bdev/nvme/nvme_rpc.o 00:04:05.861 SO libspdk_bdev_gpt.so.5.0 00:04:05.861 SO libspdk_bdev_null.so.5.0 00:04:05.861 SO libspdk_bdev_delay.so.5.0 00:04:05.861 LIB libspdk_bdev_malloc.a 00:04:05.861 LIB libspdk_bdev_passthru.a 00:04:05.861 SYMLINK libspdk_bdev_error.so 00:04:05.861 SYMLINK libspdk_bdev_delay.so 00:04:05.861 SYMLINK libspdk_bdev_gpt.so 00:04:05.861 SO libspdk_bdev_passthru.so.5.0 00:04:05.861 SO libspdk_bdev_malloc.so.5.0 00:04:05.861 SYMLINK libspdk_bdev_null.so 00:04:05.861 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.861 SYMLINK libspdk_bdev_passthru.so 00:04:05.861 SYMLINK libspdk_bdev_malloc.so 00:04:05.861 CC module/bdev/nvme/vbdev_opal.o 00:04:05.861 CC module/bdev/raid/bdev_raid.o 00:04:05.861 CC module/bdev/split/vbdev_split.o 00:04:05.861 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:06.119 CC module/bdev/split/vbdev_split_rpc.o 00:04:06.119 CC module/bdev/aio/bdev_aio.o 00:04:06.119 LIB libspdk_bdev_lvol.a 00:04:06.119 SO libspdk_bdev_lvol.so.5.0 00:04:06.119 CC module/bdev/aio/bdev_aio_rpc.o 00:04:06.119 SYMLINK libspdk_bdev_lvol.so 00:04:06.119 CC module/bdev/raid/bdev_raid_rpc.o 00:04:06.119 CC module/bdev/raid/bdev_raid_sb.o 00:04:06.119 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.119 LIB libspdk_bdev_split.a 00:04:06.378 CC module/bdev/raid/raid0.o 00:04:06.378 SO libspdk_bdev_split.so.5.0 00:04:06.378 CC module/bdev/raid/raid1.o 00:04:06.378 CC module/bdev/raid/concat.o 00:04:06.378 LIB libspdk_bdev_aio.a 00:04:06.378 SYMLINK libspdk_bdev_split.so 00:04:06.378 SO libspdk_bdev_aio.so.5.0 00:04:06.378 LIB libspdk_bdev_zone_block.a 00:04:06.378 SO libspdk_bdev_zone_block.so.5.0 00:04:06.378 SYMLINK libspdk_bdev_aio.so 00:04:06.378 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:06.378 CC module/bdev/ftl/bdev_ftl.o 00:04:06.378 SYMLINK libspdk_bdev_zone_block.so 00:04:06.378 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.378 CC module/bdev/iscsi/bdev_iscsi.o 00:04:06.378 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.378 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.636 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.636 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.636 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:06.636 LIB libspdk_bdev_raid.a 00:04:06.895 LIB libspdk_bdev_ftl.a 00:04:06.895 LIB libspdk_bdev_iscsi.a 00:04:06.895 SO libspdk_bdev_raid.so.5.0 00:04:06.895 SO libspdk_bdev_ftl.so.5.0 00:04:06.895 SO libspdk_bdev_iscsi.so.5.0 00:04:06.895 SYMLINK libspdk_bdev_ftl.so 00:04:06.895 SYMLINK libspdk_bdev_raid.so 00:04:06.895 SYMLINK libspdk_bdev_iscsi.so 00:04:06.895 LIB libspdk_bdev_virtio.a 00:04:06.895 SO libspdk_bdev_virtio.so.5.0 00:04:07.153 SYMLINK libspdk_bdev_virtio.so 00:04:07.410 LIB libspdk_bdev_nvme.a 00:04:07.410 SO libspdk_bdev_nvme.so.6.0 00:04:07.410 SYMLINK libspdk_bdev_nvme.so 00:04:07.977 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.977 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.977 CC module/event/subsystems/sock/sock.o 00:04:07.977 CC module/event/subsystems/vmd/vmd.o 00:04:07.977 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.977 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.977 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.977 LIB libspdk_event_scheduler.a 00:04:07.977 LIB libspdk_event_sock.a 00:04:07.977 LIB libspdk_event_vhost_blk.a 00:04:07.977 LIB libspdk_event_vmd.a 00:04:07.977 SO libspdk_event_sock.so.4.0 00:04:07.977 SO libspdk_event_scheduler.so.3.0 00:04:07.977 SO libspdk_event_vhost_blk.so.2.0 00:04:07.977 SO libspdk_event_vmd.so.5.0 00:04:07.977 LIB libspdk_event_iobuf.a 00:04:07.977 SYMLINK libspdk_event_vhost_blk.so 00:04:07.977 SYMLINK libspdk_event_scheduler.so 00:04:07.977 SYMLINK libspdk_event_sock.so 00:04:07.977 SYMLINK libspdk_event_vmd.so 00:04:07.977 SO libspdk_event_iobuf.so.2.0 00:04:07.977 SYMLINK libspdk_event_iobuf.so 00:04:08.235 CC module/event/subsystems/accel/accel.o 00:04:08.493 LIB libspdk_event_accel.a 00:04:08.493 SO libspdk_event_accel.so.5.0 00:04:08.493 SYMLINK libspdk_event_accel.so 00:04:08.751 CC module/event/subsystems/bdev/bdev.o 00:04:08.751 LIB libspdk_event_bdev.a 00:04:08.751 SO libspdk_event_bdev.so.5.0 00:04:09.010 SYMLINK libspdk_event_bdev.so 00:04:09.010 CC module/event/subsystems/ublk/ublk.o 00:04:09.010 CC module/event/subsystems/scsi/scsi.o 00:04:09.010 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:09.010 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:09.010 CC module/event/subsystems/nbd/nbd.o 00:04:09.268 LIB libspdk_event_ublk.a 00:04:09.268 LIB libspdk_event_nbd.a 00:04:09.268 LIB libspdk_event_scsi.a 00:04:09.268 SO libspdk_event_nbd.so.5.0 00:04:09.268 SO libspdk_event_ublk.so.2.0 00:04:09.268 SO libspdk_event_scsi.so.5.0 00:04:09.268 SYMLINK libspdk_event_nbd.so 00:04:09.268 SYMLINK libspdk_event_scsi.so 00:04:09.268 SYMLINK libspdk_event_ublk.so 00:04:09.268 LIB libspdk_event_nvmf.a 00:04:09.527 SO libspdk_event_nvmf.so.5.0 00:04:09.527 SYMLINK libspdk_event_nvmf.so 00:04:09.527 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.527 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.527 LIB libspdk_event_vhost_scsi.a 00:04:09.785 SO libspdk_event_vhost_scsi.so.2.0 00:04:09.785 LIB libspdk_event_iscsi.a 00:04:09.785 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.785 SO libspdk_event_iscsi.so.5.0 00:04:09.785 SYMLINK libspdk_event_iscsi.so 00:04:10.043 SO libspdk.so.5.0 00:04:10.043 SYMLINK libspdk.so 00:04:10.043 CXX app/trace/trace.o 00:04:10.043 CC examples/sock/hello_world/hello_sock.o 00:04:10.301 CC examples/ioat/perf/perf.o 00:04:10.302 CC examples/vmd/lsvmd/lsvmd.o 00:04:10.302 CC examples/accel/perf/accel_perf.o 00:04:10.302 CC examples/nvme/hello_world/hello_world.o 00:04:10.302 CC examples/bdev/hello_world/hello_bdev.o 00:04:10.302 CC examples/blob/hello_world/hello_blob.o 00:04:10.302 CC examples/nvmf/nvmf/nvmf.o 00:04:10.302 CC test/accel/dif/dif.o 00:04:10.302 LINK lsvmd 00:04:10.560 LINK ioat_perf 00:04:10.560 LINK hello_world 00:04:10.560 LINK hello_sock 00:04:10.560 LINK nvmf 00:04:10.560 LINK hello_bdev 00:04:10.560 LINK hello_blob 00:04:10.560 LINK spdk_trace 00:04:10.560 CC examples/vmd/led/led.o 00:04:10.560 LINK dif 00:04:10.560 CC examples/ioat/verify/verify.o 00:04:10.560 LINK accel_perf 00:04:10.560 CC examples/nvme/reconnect/reconnect.o 00:04:10.819 LINK led 00:04:10.819 CC examples/util/zipf/zipf.o 00:04:10.819 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:10.819 CC 
app/trace_record/trace_record.o 00:04:10.819 CC examples/blob/cli/blobcli.o 00:04:10.819 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.819 LINK verify 00:04:10.819 LINK zipf 00:04:10.819 CC test/bdev/bdevio/bdevio.o 00:04:10.819 CC test/app/bdev_svc/bdev_svc.o 00:04:11.078 LINK reconnect 00:04:11.078 CC examples/thread/thread/thread_ex.o 00:04:11.078 LINK spdk_trace_record 00:04:11.078 CC app/nvmf_tgt/nvmf_main.o 00:04:11.078 LINK bdev_svc 00:04:11.078 CC examples/idxd/perf/perf.o 00:04:11.078 LINK nvme_manage 00:04:11.078 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.355 LINK thread 00:04:11.355 LINK blobcli 00:04:11.355 LINK nvmf_tgt 00:04:11.355 LINK bdevio 00:04:11.355 CC examples/nvme/arbitration/arbitration.o 00:04:11.355 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.355 LINK iscsi_tgt 00:04:11.355 LINK idxd_perf 00:04:11.625 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.625 LINK bdevperf 00:04:11.625 CC test/blobfs/mkfs/mkfs.o 00:04:11.625 CC app/spdk_lspci/spdk_lspci.o 00:04:11.625 CC app/spdk_tgt/spdk_tgt.o 00:04:11.625 CC app/spdk_nvme_perf/perf.o 00:04:11.625 TEST_HEADER include/spdk/accel.h 00:04:11.625 TEST_HEADER include/spdk/accel_module.h 00:04:11.625 TEST_HEADER include/spdk/assert.h 00:04:11.625 TEST_HEADER include/spdk/barrier.h 00:04:11.625 LINK interrupt_tgt 00:04:11.625 LINK arbitration 00:04:11.625 TEST_HEADER include/spdk/base64.h 00:04:11.625 TEST_HEADER include/spdk/bdev.h 00:04:11.625 TEST_HEADER include/spdk/bdev_module.h 00:04:11.625 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.625 TEST_HEADER include/spdk/bit_array.h 00:04:11.625 TEST_HEADER include/spdk/bit_pool.h 00:04:11.625 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.625 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.625 TEST_HEADER include/spdk/blobfs.h 00:04:11.625 TEST_HEADER include/spdk/blob.h 00:04:11.625 TEST_HEADER include/spdk/conf.h 00:04:11.625 TEST_HEADER include/spdk/config.h 00:04:11.625 TEST_HEADER include/spdk/cpuset.h 00:04:11.625 TEST_HEADER include/spdk/crc16.h 00:04:11.625 TEST_HEADER include/spdk/crc32.h 00:04:11.625 LINK spdk_lspci 00:04:11.625 TEST_HEADER include/spdk/crc64.h 00:04:11.625 TEST_HEADER include/spdk/dif.h 00:04:11.625 TEST_HEADER include/spdk/dma.h 00:04:11.625 TEST_HEADER include/spdk/endian.h 00:04:11.625 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.625 TEST_HEADER include/spdk/env.h 00:04:11.625 TEST_HEADER include/spdk/event.h 00:04:11.626 TEST_HEADER include/spdk/fd_group.h 00:04:11.626 TEST_HEADER include/spdk/fd.h 00:04:11.626 TEST_HEADER include/spdk/file.h 00:04:11.626 TEST_HEADER include/spdk/ftl.h 00:04:11.626 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.626 TEST_HEADER include/spdk/hexlify.h 00:04:11.626 TEST_HEADER include/spdk/histogram_data.h 00:04:11.626 TEST_HEADER include/spdk/idxd.h 00:04:11.626 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.626 TEST_HEADER include/spdk/init.h 00:04:11.626 TEST_HEADER include/spdk/ioat.h 00:04:11.626 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.626 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.626 TEST_HEADER include/spdk/json.h 00:04:11.626 TEST_HEADER include/spdk/jsonrpc.h 00:04:11.626 TEST_HEADER include/spdk/likely.h 00:04:11.626 TEST_HEADER include/spdk/log.h 00:04:11.626 LINK mkfs 00:04:11.626 TEST_HEADER include/spdk/lvol.h 00:04:11.626 TEST_HEADER include/spdk/memory.h 00:04:11.626 TEST_HEADER include/spdk/mmio.h 00:04:11.885 TEST_HEADER include/spdk/nbd.h 00:04:11.885 TEST_HEADER include/spdk/notify.h 00:04:11.885 TEST_HEADER include/spdk/nvme.h 00:04:11.885 TEST_HEADER include/spdk/nvme_intel.h 
00:04:11.885 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.885 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.885 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.885 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.885 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:11.885 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.885 TEST_HEADER include/spdk/nvmf.h 00:04:11.885 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.885 LINK nvme_fuzz 00:04:11.885 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.885 TEST_HEADER include/spdk/opal.h 00:04:11.885 TEST_HEADER include/spdk/opal_spec.h 00:04:11.885 TEST_HEADER include/spdk/pci_ids.h 00:04:11.885 TEST_HEADER include/spdk/pipe.h 00:04:11.885 TEST_HEADER include/spdk/queue.h 00:04:11.885 TEST_HEADER include/spdk/reduce.h 00:04:11.885 TEST_HEADER include/spdk/rpc.h 00:04:11.885 TEST_HEADER include/spdk/scheduler.h 00:04:11.885 TEST_HEADER include/spdk/scsi.h 00:04:11.885 LINK spdk_tgt 00:04:11.885 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.885 TEST_HEADER include/spdk/sock.h 00:04:11.885 TEST_HEADER include/spdk/stdinc.h 00:04:11.885 TEST_HEADER include/spdk/string.h 00:04:11.885 TEST_HEADER include/spdk/thread.h 00:04:11.885 TEST_HEADER include/spdk/trace.h 00:04:11.885 TEST_HEADER include/spdk/trace_parser.h 00:04:11.885 TEST_HEADER include/spdk/tree.h 00:04:11.885 TEST_HEADER include/spdk/ublk.h 00:04:11.885 TEST_HEADER include/spdk/util.h 00:04:11.885 TEST_HEADER include/spdk/uuid.h 00:04:11.885 TEST_HEADER include/spdk/version.h 00:04:11.885 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.885 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.885 TEST_HEADER include/spdk/vhost.h 00:04:11.885 TEST_HEADER include/spdk/vmd.h 00:04:11.885 TEST_HEADER include/spdk/xor.h 00:04:11.885 TEST_HEADER include/spdk/zipf.h 00:04:11.885 CXX test/cpp_headers/accel.o 00:04:11.885 CC test/dma/test_dma/test_dma.o 00:04:11.885 CC examples/nvme/hotplug/hotplug.o 00:04:11.885 CC app/spdk_nvme_identify/identify.o 00:04:11.885 CC app/spdk_nvme_discover/discovery_aer.o 00:04:11.885 CXX test/cpp_headers/accel_module.o 00:04:11.885 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.885 CXX test/cpp_headers/assert.o 00:04:12.144 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.144 LINK hotplug 00:04:12.144 CXX test/cpp_headers/barrier.o 00:04:12.144 LINK spdk_nvme_discover 00:04:12.403 LINK cmb_copy 00:04:12.403 LINK test_dma 00:04:12.403 CC test/app/histogram_perf/histogram_perf.o 00:04:12.403 CXX test/cpp_headers/base64.o 00:04:12.403 CC app/spdk_top/spdk_top.o 00:04:12.403 LINK spdk_nvme_perf 00:04:12.403 CC app/vhost/vhost.o 00:04:12.403 LINK histogram_perf 00:04:12.403 CC examples/nvme/abort/abort.o 00:04:12.661 CXX test/cpp_headers/bdev.o 00:04:12.661 LINK vhost 00:04:12.661 CC test/env/vtophys/vtophys.o 00:04:12.661 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.661 CC test/env/mem_callbacks/mem_callbacks.o 00:04:12.661 CXX test/cpp_headers/bdev_module.o 00:04:12.661 LINK spdk_nvme_identify 00:04:12.920 LINK vtophys 00:04:12.920 LINK env_dpdk_post_init 00:04:12.920 LINK abort 00:04:12.920 CXX test/cpp_headers/bdev_zone.o 00:04:12.920 CC test/app/jsoncat/jsoncat.o 00:04:12.920 CC test/app/stub/stub.o 00:04:12.920 LINK jsoncat 00:04:13.179 CC app/spdk_dd/spdk_dd.o 00:04:13.179 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:13.179 CXX test/cpp_headers/bit_array.o 00:04:13.179 LINK stub 00:04:13.179 CC app/fio/nvme/fio_plugin.o 00:04:13.179 CXX test/cpp_headers/bit_pool.o 00:04:13.179 LINK pmr_persistence 00:04:13.179 LINK spdk_top 00:04:13.179 CC 
test/env/memory/memory_ut.o 00:04:13.179 LINK mem_callbacks 00:04:13.179 CC test/env/pci/pci_ut.o 00:04:13.438 CXX test/cpp_headers/blob_bdev.o 00:04:13.438 LINK spdk_dd 00:04:13.438 CC app/fio/bdev/fio_plugin.o 00:04:13.438 CC test/event/event_perf/event_perf.o 00:04:13.438 LINK iscsi_fuzz 00:04:13.697 LINK spdk_nvme 00:04:13.697 CC test/lvol/esnap/esnap.o 00:04:13.697 CXX test/cpp_headers/blobfs_bdev.o 00:04:13.697 CXX test/cpp_headers/blobfs.o 00:04:13.697 CXX test/cpp_headers/blob.o 00:04:13.697 LINK event_perf 00:04:13.697 LINK pci_ut 00:04:13.956 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:13.956 CXX test/cpp_headers/conf.o 00:04:13.956 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:13.956 CXX test/cpp_headers/config.o 00:04:13.956 CXX test/cpp_headers/cpuset.o 00:04:13.956 CC test/event/reactor/reactor.o 00:04:13.956 LINK spdk_bdev 00:04:13.956 CXX test/cpp_headers/crc16.o 00:04:14.215 CC test/rpc_client/rpc_client_test.o 00:04:14.215 LINK memory_ut 00:04:14.215 LINK reactor 00:04:14.215 CC test/nvme/aer/aer.o 00:04:14.215 CXX test/cpp_headers/crc32.o 00:04:14.215 CC test/event/reactor_perf/reactor_perf.o 00:04:14.215 CC test/thread/poller_perf/poller_perf.o 00:04:14.215 LINK vhost_fuzz 00:04:14.215 LINK rpc_client_test 00:04:14.474 LINK reactor_perf 00:04:14.474 CC test/event/app_repeat/app_repeat.o 00:04:14.474 CXX test/cpp_headers/crc64.o 00:04:14.474 LINK poller_perf 00:04:14.474 CC test/event/scheduler/scheduler.o 00:04:14.474 LINK aer 00:04:14.474 CXX test/cpp_headers/dif.o 00:04:14.474 CC test/nvme/reset/reset.o 00:04:14.474 CXX test/cpp_headers/dma.o 00:04:14.474 LINK app_repeat 00:04:14.474 CXX test/cpp_headers/endian.o 00:04:14.474 CC test/nvme/sgl/sgl.o 00:04:14.733 CXX test/cpp_headers/env_dpdk.o 00:04:14.733 CXX test/cpp_headers/env.o 00:04:14.733 CXX test/cpp_headers/event.o 00:04:14.733 LINK scheduler 00:04:14.733 CXX test/cpp_headers/fd_group.o 00:04:14.733 CC test/nvme/e2edp/nvme_dp.o 00:04:14.991 LINK reset 00:04:14.991 CXX test/cpp_headers/fd.o 00:04:14.991 LINK sgl 00:04:14.991 CXX test/cpp_headers/file.o 00:04:14.991 CXX test/cpp_headers/ftl.o 00:04:14.991 CC test/nvme/overhead/overhead.o 00:04:15.250 CC test/nvme/err_injection/err_injection.o 00:04:15.250 CXX test/cpp_headers/gpt_spec.o 00:04:15.250 LINK nvme_dp 00:04:15.250 CC test/nvme/startup/startup.o 00:04:15.250 CC test/nvme/reserve/reserve.o 00:04:15.250 CC test/nvme/simple_copy/simple_copy.o 00:04:15.250 CXX test/cpp_headers/hexlify.o 00:04:15.510 LINK startup 00:04:15.510 CC test/nvme/connect_stress/connect_stress.o 00:04:15.510 CXX test/cpp_headers/histogram_data.o 00:04:15.510 LINK err_injection 00:04:15.510 LINK overhead 00:04:15.510 LINK reserve 00:04:15.510 CXX test/cpp_headers/idxd.o 00:04:15.510 LINK simple_copy 00:04:15.768 CXX test/cpp_headers/idxd_spec.o 00:04:15.768 CXX test/cpp_headers/init.o 00:04:15.768 CXX test/cpp_headers/ioat.o 00:04:15.768 CXX test/cpp_headers/ioat_spec.o 00:04:15.768 LINK connect_stress 00:04:15.768 CC test/nvme/boot_partition/boot_partition.o 00:04:15.768 CXX test/cpp_headers/iscsi_spec.o 00:04:15.768 CXX test/cpp_headers/json.o 00:04:15.768 CC test/nvme/compliance/nvme_compliance.o 00:04:15.768 CXX test/cpp_headers/jsonrpc.o 00:04:16.027 CXX test/cpp_headers/likely.o 00:04:16.027 LINK boot_partition 00:04:16.027 CC test/nvme/fused_ordering/fused_ordering.o 00:04:16.027 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:16.027 CXX test/cpp_headers/log.o 00:04:16.027 CC test/nvme/fdp/fdp.o 00:04:16.027 CXX test/cpp_headers/lvol.o 00:04:16.027 CXX 
test/cpp_headers/memory.o 00:04:16.027 CXX test/cpp_headers/mmio.o 00:04:16.027 LINK fused_ordering 00:04:16.285 LINK nvme_compliance 00:04:16.285 LINK doorbell_aers 00:04:16.285 CXX test/cpp_headers/nbd.o 00:04:16.285 CXX test/cpp_headers/notify.o 00:04:16.285 CC test/nvme/cuse/cuse.o 00:04:16.285 CXX test/cpp_headers/nvme.o 00:04:16.285 CXX test/cpp_headers/nvme_intel.o 00:04:16.285 CXX test/cpp_headers/nvme_ocssd.o 00:04:16.285 LINK fdp 00:04:16.285 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:16.285 CXX test/cpp_headers/nvme_spec.o 00:04:16.544 CXX test/cpp_headers/nvme_zns.o 00:04:16.544 CXX test/cpp_headers/nvmf_cmd.o 00:04:16.544 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:16.544 CXX test/cpp_headers/nvmf.o 00:04:16.544 CXX test/cpp_headers/nvmf_spec.o 00:04:16.544 CXX test/cpp_headers/nvmf_transport.o 00:04:16.544 CXX test/cpp_headers/opal.o 00:04:16.544 CXX test/cpp_headers/opal_spec.o 00:04:16.544 CXX test/cpp_headers/pci_ids.o 00:04:16.544 CXX test/cpp_headers/pipe.o 00:04:16.802 CXX test/cpp_headers/queue.o 00:04:16.802 CXX test/cpp_headers/reduce.o 00:04:16.802 CXX test/cpp_headers/rpc.o 00:04:16.802 CXX test/cpp_headers/scheduler.o 00:04:16.802 CXX test/cpp_headers/scsi.o 00:04:16.802 CXX test/cpp_headers/scsi_spec.o 00:04:16.802 CXX test/cpp_headers/sock.o 00:04:16.802 CXX test/cpp_headers/stdinc.o 00:04:16.802 CXX test/cpp_headers/string.o 00:04:16.802 CXX test/cpp_headers/thread.o 00:04:16.802 CXX test/cpp_headers/trace.o 00:04:17.061 CXX test/cpp_headers/trace_parser.o 00:04:17.061 CXX test/cpp_headers/tree.o 00:04:17.061 CXX test/cpp_headers/ublk.o 00:04:17.061 CXX test/cpp_headers/util.o 00:04:17.061 CXX test/cpp_headers/uuid.o 00:04:17.061 CXX test/cpp_headers/version.o 00:04:17.061 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.061 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.061 CXX test/cpp_headers/vhost.o 00:04:17.061 CXX test/cpp_headers/vmd.o 00:04:17.061 CXX test/cpp_headers/xor.o 00:04:17.061 CXX test/cpp_headers/zipf.o 00:04:17.319 LINK cuse 00:04:18.254 LINK esnap 00:04:18.512 00:04:18.512 real 0m48.024s 00:04:18.512 user 4m34.697s 00:04:18.513 sys 1m3.273s 00:04:18.513 07:10:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:18.513 07:10:20 -- common/autotest_common.sh@10 -- $ set +x 00:04:18.513 ************************************ 00:04:18.513 END TEST make 00:04:18.513 ************************************ 00:04:18.771 07:10:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:18.771 07:10:20 -- nvmf/common.sh@7 -- # uname -s 00:04:18.771 07:10:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.771 07:10:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.771 07:10:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.771 07:10:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.771 07:10:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.771 07:10:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.771 07:10:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.771 07:10:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.771 07:10:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.771 07:10:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.771 07:10:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:04:18.771 07:10:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:04:18.771 07:10:20 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.771 07:10:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.771 07:10:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:18.771 07:10:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.771 07:10:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.771 07:10:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.771 07:10:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.771 07:10:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.771 07:10:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.771 07:10:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.771 07:10:20 -- paths/export.sh@5 -- # export PATH 00:04:18.771 07:10:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.771 07:10:20 -- nvmf/common.sh@46 -- # : 0 00:04:18.771 07:10:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:18.771 07:10:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:18.771 07:10:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:18.771 07:10:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.771 07:10:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.771 07:10:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:18.771 07:10:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:18.771 07:10:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:18.771 07:10:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:18.771 07:10:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:18.771 07:10:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:18.771 07:10:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:18.771 07:10:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.771 07:10:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:18.771 07:10:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.771 07:10:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:18.771 07:10:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:18.771 07:10:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:18.771 07:10:20 -- spdk/autotest.sh@48 -- # udevadm_pid=61816 00:04:18.771 07:10:20 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:18.771 
07:10:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:18.772 07:10:20 -- spdk/autotest.sh@54 -- # echo 61829 00:04:18.772 07:10:20 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:18.772 07:10:20 -- spdk/autotest.sh@56 -- # echo 61832 00:04:18.772 07:10:20 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:18.772 07:10:20 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:18.772 07:10:20 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:18.772 07:10:20 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:18.772 07:10:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:18.772 07:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.772 07:10:20 -- spdk/autotest.sh@70 -- # create_test_list 00:04:18.772 07:10:20 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:18.772 07:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.772 07:10:20 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:18.772 07:10:20 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:18.772 07:10:20 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:18.772 07:10:20 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:18.772 07:10:20 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:18.772 07:10:20 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:18.772 07:10:20 -- common/autotest_common.sh@1440 -- # uname 00:04:18.772 07:10:20 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:18.772 07:10:20 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:18.772 07:10:20 -- common/autotest_common.sh@1460 -- # uname 00:04:18.772 07:10:20 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:18.772 07:10:20 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:19.030 07:10:20 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:19.030 07:10:20 -- spdk/autotest.sh@83 -- # hash lcov 00:04:19.030 07:10:20 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:19.030 07:10:20 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:19.030 --rc lcov_branch_coverage=1 00:04:19.030 --rc lcov_function_coverage=1 00:04:19.030 --rc genhtml_branch_coverage=1 00:04:19.030 --rc genhtml_function_coverage=1 00:04:19.030 --rc genhtml_legend=1 00:04:19.030 --rc geninfo_all_blocks=1 00:04:19.030 ' 00:04:19.030 07:10:20 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:19.030 --rc lcov_branch_coverage=1 00:04:19.030 --rc lcov_function_coverage=1 00:04:19.030 --rc genhtml_branch_coverage=1 00:04:19.030 --rc genhtml_function_coverage=1 00:04:19.030 --rc genhtml_legend=1 00:04:19.030 --rc geninfo_all_blocks=1 00:04:19.030 ' 00:04:19.030 07:10:20 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:19.030 --rc lcov_branch_coverage=1 00:04:19.030 --rc lcov_function_coverage=1 00:04:19.030 --rc genhtml_branch_coverage=1 00:04:19.030 --rc genhtml_function_coverage=1 00:04:19.030 --rc genhtml_legend=1 00:04:19.030 --rc geninfo_all_blocks=1 00:04:19.030 --no-external' 00:04:19.030 07:10:20 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:19.030 --rc lcov_branch_coverage=1 00:04:19.030 --rc lcov_function_coverage=1 00:04:19.030 --rc genhtml_branch_coverage=1 00:04:19.030 --rc genhtml_function_coverage=1 00:04:19.030 --rc 
genhtml_legend=1 00:04:19.030 --rc geninfo_all_blocks=1 00:04:19.030 --no-external' 00:04:19.030 07:10:20 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:19.030 lcov: LCOV version 1.15 00:04:19.030 07:10:20 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:25.588 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:25.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:25.588 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:25.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:25.588 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:25.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 
00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:40.464 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:40.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:40.465 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:40.465 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:40.465 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:40.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:40.723 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:40.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:40.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:43.256 07:10:45 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:43.256 07:10:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.256 07:10:45 -- common/autotest_common.sh@10 -- # set +x 00:04:43.256 07:10:45 -- spdk/autotest.sh@102 -- # rm -f 00:04:43.256 07:10:45 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.228 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:44.228 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:44.228 07:10:45 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:44.228 07:10:45 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:44.228 07:10:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:44.228 07:10:45 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:44.228 07:10:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:44.228 07:10:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:44.228 07:10:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:44.228 07:10:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1657 -- # 
for nvme in /sys/block/nvme* 00:04:44.228 07:10:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:44.228 07:10:45 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:44.228 07:10:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:44.228 07:10:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:44.228 07:10:45 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:44.228 07:10:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:44.228 07:10:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:44.228 07:10:45 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:44.228 07:10:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:44.228 07:10:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.228 07:10:45 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:44.228 07:10:45 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:44.228 07:10:45 -- spdk/autotest.sh@121 -- # grep -v p 00:04:44.228 07:10:45 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:44.228 07:10:45 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:44.228 07:10:45 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:44.228 07:10:45 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:44.228 07:10:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:44.228 No valid GPT data, bailing 00:04:44.228 07:10:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.228 07:10:45 -- scripts/common.sh@393 -- # pt= 00:04:44.228 07:10:45 -- scripts/common.sh@394 -- # return 1 00:04:44.228 07:10:45 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:44.228 1+0 records in 00:04:44.228 1+0 records out 00:04:44.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503467 s, 208 MB/s 00:04:44.228 07:10:45 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:44.228 07:10:45 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:44.228 07:10:45 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:44.229 07:10:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:44.229 07:10:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:44.229 No valid GPT data, bailing 00:04:44.229 07:10:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:44.229 07:10:46 -- scripts/common.sh@393 -- # pt= 00:04:44.229 07:10:46 -- scripts/common.sh@394 -- # return 1 00:04:44.229 07:10:46 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:44.229 1+0 records in 00:04:44.229 1+0 records out 00:04:44.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450728 s, 233 MB/s 00:04:44.229 07:10:46 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:44.229 07:10:46 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:44.229 07:10:46 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 
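Note on the sequence traced above: before reusing a namespace, autotest asks scripts/common.sh's block_in_use whether the device already carries a partition table (spdk-gpt.py first, then blkid -s PTTYPE as a fallback); when both come back empty, as in the "No valid GPT data, bailing" cases here, the first MiB of the device is zeroed with dd so stale metadata cannot leak into later tests. A minimal sketch of that flow, assuming $rootdir points at the SPDK checkout (a simplification for illustration, not the exact autotest.sh code):

  # Probe every whole namespace (grep -v p drops partition nodes) and scrub the free ones.
  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      if ! "$rootdir/scripts/spdk-gpt.py" "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          # No GPT and no other partition table: treat the namespace as free and wipe its first MiB.
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done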
00:04:44.229 07:10:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:44.229 07:10:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:44.487 No valid GPT data, bailing 00:04:44.487 07:10:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:44.487 07:10:46 -- scripts/common.sh@393 -- # pt= 00:04:44.487 07:10:46 -- scripts/common.sh@394 -- # return 1 00:04:44.487 07:10:46 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:44.487 1+0 records in 00:04:44.487 1+0 records out 00:04:44.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465853 s, 225 MB/s 00:04:44.487 07:10:46 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:44.487 07:10:46 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:44.487 07:10:46 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:04:44.487 07:10:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:44.487 07:10:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:44.487 No valid GPT data, bailing 00:04:44.487 07:10:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:44.487 07:10:46 -- scripts/common.sh@393 -- # pt= 00:04:44.487 07:10:46 -- scripts/common.sh@394 -- # return 1 00:04:44.487 07:10:46 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:44.487 1+0 records in 00:04:44.487 1+0 records out 00:04:44.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454463 s, 231 MB/s 00:04:44.487 07:10:46 -- spdk/autotest.sh@129 -- # sync 00:04:44.745 07:10:46 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:44.745 07:10:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:44.745 07:10:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:46.648 07:10:48 -- spdk/autotest.sh@135 -- # uname -s 00:04:46.648 07:10:48 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:46.648 07:10:48 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:46.648 07:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.648 07:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.648 07:10:48 -- common/autotest_common.sh@10 -- # set +x 00:04:46.648 ************************************ 00:04:46.648 START TEST setup.sh 00:04:46.648 ************************************ 00:04:46.648 07:10:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:46.907 * Looking for test storage... 00:04:46.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.907 07:10:48 -- setup/test-setup.sh@10 -- # uname -s 00:04:46.907 07:10:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:46.907 07:10:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:46.907 07:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.907 07:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.907 07:10:48 -- common/autotest_common.sh@10 -- # set +x 00:04:46.907 ************************************ 00:04:46.907 START TEST acl 00:04:46.907 ************************************ 00:04:46.907 07:10:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:46.907 * Looking for test storage... 
00:04:46.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.907 07:10:48 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:46.907 07:10:48 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:46.907 07:10:48 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:46.907 07:10:48 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:46.907 07:10:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.907 07:10:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:46.907 07:10:48 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:46.907 07:10:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.907 07:10:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:46.907 07:10:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:46.907 07:10:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.907 07:10:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:46.907 07:10:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:46.907 07:10:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.907 07:10:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:46.907 07:10:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:46.907 07:10:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:46.907 07:10:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.907 07:10:48 -- setup/acl.sh@12 -- # devs=() 00:04:46.907 07:10:48 -- setup/acl.sh@12 -- # declare -a devs 00:04:46.907 07:10:48 -- setup/acl.sh@13 -- # drivers=() 00:04:46.907 07:10:48 -- setup/acl.sh@13 -- # declare -A drivers 00:04:46.907 07:10:48 -- setup/acl.sh@51 -- # setup reset 00:04:46.907 07:10:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.907 07:10:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.843 07:10:49 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:47.843 07:10:49 -- setup/acl.sh@16 -- # local dev driver 00:04:47.843 07:10:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.843 07:10:49 -- setup/acl.sh@15 -- # setup output status 00:04:47.843 07:10:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.843 07:10:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:47.843 Hugepages 00:04:47.843 node hugesize free / total 00:04:47.843 07:10:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:47.843 07:10:49 -- setup/acl.sh@19 -- # continue 00:04:47.843 07:10:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.843 00:04:47.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.843 07:10:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:47.843 07:10:49 -- setup/acl.sh@19 -- # continue 00:04:47.843 07:10:49 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:04:47.843 07:10:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:47.843 07:10:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:47.843 07:10:49 -- setup/acl.sh@20 -- # continue 00:04:47.843 07:10:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.101 07:10:49 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:48.102 07:10:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:48.102 07:10:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:48.102 07:10:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:48.102 07:10:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:48.102 07:10:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.102 07:10:49 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:48.102 07:10:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:48.102 07:10:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:48.102 07:10:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:48.102 07:10:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:48.102 07:10:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.102 07:10:49 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:48.102 07:10:49 -- setup/acl.sh@54 -- # run_test denied denied 00:04:48.102 07:10:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.102 07:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.102 07:10:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.102 ************************************ 00:04:48.102 START TEST denied 00:04:48.102 ************************************ 00:04:48.102 07:10:49 -- common/autotest_common.sh@1104 -- # denied 00:04:48.102 07:10:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:48.102 07:10:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:48.102 07:10:49 -- setup/acl.sh@38 -- # setup output config 00:04:48.102 07:10:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.102 07:10:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.039 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:49.039 07:10:50 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:49.039 07:10:50 -- setup/acl.sh@28 -- # local dev driver 00:04:49.039 07:10:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.039 07:10:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:49.039 07:10:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:49.039 07:10:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.039 07:10:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.039 07:10:50 -- setup/acl.sh@41 -- # setup reset 00:04:49.039 07:10:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.039 07:10:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.606 00:04:49.606 real 0m1.557s 00:04:49.606 user 0m0.629s 00:04:49.606 sys 0m0.886s 00:04:49.606 07:10:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.606 ************************************ 00:04:49.606 END TEST denied 00:04:49.606 ************************************ 00:04:49.606 07:10:51 -- common/autotest_common.sh@10 -- # set +x 00:04:49.606 07:10:51 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:49.606 07:10:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.606 07:10:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.606 
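Note on the denied/allowed pair being traced here: both tests drive scripts/setup.sh purely through environment variables, PCI_BLOCKED to make "setup.sh config" skip a controller (hence the "Skipping denied controller at 0000:00:06.0" line above, with that device left on the kernel nvme driver) and PCI_ALLOWED to restrict binding to a single controller. A hedged usage sketch, with the address copied from this trace:

  # Keep 0000:00:06.0 on the kernel driver while binding everything else for SPDK:
  PCI_BLOCKED="0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
  # Or bind only that one controller and leave the rest untouched:
  PCI_ALLOWED="0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
  # Hand all devices back to the kernel drivers when done:
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset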
07:10:51 -- common/autotest_common.sh@10 -- # set +x 00:04:49.606 ************************************ 00:04:49.606 START TEST allowed 00:04:49.606 ************************************ 00:04:49.606 07:10:51 -- common/autotest_common.sh@1104 -- # allowed 00:04:49.606 07:10:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.606 07:10:51 -- setup/acl.sh@45 -- # setup output config 00:04:49.606 07:10:51 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:49.606 07:10:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.606 07:10:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.542 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.542 07:10:52 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:50.542 07:10:52 -- setup/acl.sh@28 -- # local dev driver 00:04:50.542 07:10:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:50.542 07:10:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:50.542 07:10:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:50.542 07:10:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:50.542 07:10:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:50.542 07:10:52 -- setup/acl.sh@48 -- # setup reset 00:04:50.542 07:10:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.542 07:10:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.479 00:04:51.480 real 0m1.623s 00:04:51.480 user 0m0.748s 00:04:51.480 sys 0m0.882s 00:04:51.480 07:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.480 07:10:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.480 ************************************ 00:04:51.480 END TEST allowed 00:04:51.480 ************************************ 00:04:51.480 00:04:51.480 real 0m4.543s 00:04:51.480 user 0m1.971s 00:04:51.480 sys 0m2.559s 00:04:51.480 07:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.480 07:10:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.480 ************************************ 00:04:51.480 END TEST acl 00:04:51.480 ************************************ 00:04:51.480 07:10:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:51.480 07:10:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.480 07:10:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.480 07:10:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.480 ************************************ 00:04:51.480 START TEST hugepages 00:04:51.480 ************************************ 00:04:51.480 07:10:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:51.480 * Looking for test storage... 
00:04:51.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.480 07:10:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:51.480 07:10:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:51.480 07:10:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:51.480 07:10:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:51.480 07:10:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:51.480 07:10:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:51.480 07:10:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:51.480 07:10:53 -- setup/common.sh@18 -- # local node= 00:04:51.480 07:10:53 -- setup/common.sh@19 -- # local var val 00:04:51.480 07:10:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.480 07:10:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.480 07:10:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.480 07:10:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.480 07:10:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.480 07:10:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4413296 kB' 'MemAvailable: 7336664 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 481928 kB' 'Inactive: 2747016 kB' 'Active(anon): 112744 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747016 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 104164 kB' 'Mapped: 50900 kB' 'Shmem: 10512 kB' 'KReclaimable: 88636 kB' 'Slab: 191156 kB' 'SReclaimable: 88636 kB' 'SUnreclaim: 102520 kB' 'KernelStack: 6748 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 297704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- 
setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.480 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.480 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # continue 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.481 07:10:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.481 07:10:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.481 07:10:53 -- setup/common.sh@33 -- # echo 2048 00:04:51.481 07:10:53 -- setup/common.sh@33 -- # return 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:51.481 07:10:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:51.481 07:10:53 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:51.481 07:10:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:51.481 07:10:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:51.481 07:10:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
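The long run of "[[ <key> == Hugepagesize ]] / continue" steps above is setup/common.sh's get_meminfo scanning every key/value pair of /proc/meminfo until it reaches Hugepagesize and echoes 2048, which hugepages.sh then records as default_hugepages. Outside the harness the same number can be pulled with a one-liner; a minimal equivalent (not the harness code itself):

  # Default hugepage size in kB as reported by the kernel (2048 on this runner):
  awk '/^Hugepagesize:/ {print $2}' /proc/meminfo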
00:04:51.481 07:10:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:51.481 07:10:53 -- setup/hugepages.sh@207 -- # get_nodes 00:04:51.481 07:10:53 -- setup/hugepages.sh@27 -- # local node 00:04:51.481 07:10:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.481 07:10:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:51.481 07:10:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.481 07:10:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.481 07:10:53 -- setup/hugepages.sh@208 -- # clear_hp 00:04:51.481 07:10:53 -- setup/hugepages.sh@37 -- # local node hp 00:04:51.481 07:10:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.481 07:10:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.481 07:10:53 -- setup/hugepages.sh@41 -- # echo 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.481 07:10:53 -- setup/hugepages.sh@41 -- # echo 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.481 07:10:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.481 07:10:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:51.481 07:10:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.481 07:10:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.481 07:10:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.481 ************************************ 00:04:51.481 START TEST default_setup 00:04:51.481 ************************************ 00:04:51.481 07:10:53 -- common/autotest_common.sh@1104 -- # default_setup 00:04:51.481 07:10:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.481 07:10:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.481 07:10:53 -- setup/hugepages.sh@51 -- # shift 00:04:51.481 07:10:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.481 07:10:53 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.481 07:10:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.481 07:10:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.481 07:10:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.481 07:10:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.481 07:10:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.481 07:10:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.481 07:10:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.481 07:10:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.481 07:10:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.481 07:10:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.481 07:10:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.481 07:10:53 -- setup/hugepages.sh@73 -- # return 0 00:04:51.481 07:10:53 -- setup/hugepages.sh@137 -- # setup output 00:04:51.481 07:10:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.481 07:10:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.419 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.419 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.419 07:10:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:52.419 07:10:54 -- setup/hugepages.sh@89 -- # local node 00:04:52.419 07:10:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.419 07:10:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.419 07:10:54 -- setup/hugepages.sh@92 -- # local surp 00:04:52.419 07:10:54 -- setup/hugepages.sh@93 -- # local resv 00:04:52.419 07:10:54 -- setup/hugepages.sh@94 -- # local anon 00:04:52.419 07:10:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.419 07:10:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.419 07:10:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.419 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:52.419 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:52.419 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.419 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.419 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.419 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.419 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.419 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6452052 kB' 'MemAvailable: 9375220 kB' 'Buffers: 3696 kB' 'Cached: 3123008 kB' 'SwapCached: 0 kB' 'Active: 498212 kB' 'Inactive: 2747016 kB' 'Active(anon): 129028 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747016 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190792 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102556 kB' 'KernelStack: 6688 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 
07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.419 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.419 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 
-- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.420 07:10:54 -- setup/common.sh@33 -- # echo 0 00:04:52.420 07:10:54 -- setup/common.sh@33 -- # return 0 00:04:52.420 07:10:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.420 07:10:54 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.420 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.420 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:52.420 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:52.420 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.420 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.420 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.420 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.420 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.420 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6451804 kB' 'MemAvailable: 9374988 kB' 'Buffers: 3696 kB' 'Cached: 3123008 kB' 'SwapCached: 0 kB' 'Active: 497752 kB' 'Inactive: 2747032 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190792 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102556 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 313820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- 
setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 
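Just before this scan, the trace showed the node-aware side of the same helper: node is empty here, so /proc/meminfo is used, but when a node is given the helper reads /sys/devices/system/node/node<N>/meminfo instead and strips the leading "Node <N> " prefix from every line (the mapfile -t mem and prefix-strip steps above), so the same key/value scan works for both sources. A small sketch of that selection, assuming extglob is enabled; node_meminfo_sketch is a hypothetical name:

    # Sketch of the per-node file selection and prefix stripping visible in the
    # trace; node_meminfo_sketch is a hypothetical name.
    shopt -s extglob
    node_meminfo_sketch() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; drop it so the
        # "Key: value" scan is identical for the global and per-node cases.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }
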
00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 07:10:54 -- setup/common.sh@33 -- # echo 0 00:04:52.421 07:10:54 -- setup/common.sh@33 -- # return 0 00:04:52.421 07:10:54 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.421 07:10:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.682 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.682 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:52.682 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:52.682 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.682 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.683 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.683 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.683 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.683 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6451552 kB' 'MemAvailable: 9374736 kB' 'Buffers: 3696 kB' 'Cached: 3123008 kB' 
'SwapCached: 0 kB' 'Active: 497856 kB' 'Inactive: 2747032 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190776 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102540 kB' 'KernelStack: 6704 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.683 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.683 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 
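The entries that follow close out the bookkeeping: the HugePages_Rsvd scan returns 0, and the test then checks the pool it configured earlier. default_setup asked get_test_nr_hugepages for 2097152 on node 0, which, divided by the 2048 kB default page size, gives the 1024 pages seen as nr_hugepages; verify_nr_hugepages then requires HugePages_Total to equal that target plus any surplus and reserved pages. A sketch of that accounting, using the values printed in this run:

    # Accounting performed by the entries that follow, with values from this run.
    size_kb=2097152                          # pool size requested by default_setup
    page_kb=2048                             # default hugepage size (Hugepagesize)
    nr_hugepages=$(( size_kb / page_kb ))    # 1024 pages
    surp=0                                   # HugePages_Surp
    resv=0                                   # HugePages_Rsvd
    hugepages_total=1024                     # HugePages_Total read back from meminfo
    (( hugepages_total == nr_hugepages + surp + resv )) && echo "hugepage pool verified"
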
00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # continue 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.684 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.684 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.684 07:10:54 -- setup/common.sh@33 -- # echo 0 00:04:52.684 07:10:54 -- setup/common.sh@33 -- # return 0 00:04:52.684 nr_hugepages=1024 00:04:52.684 07:10:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.684 07:10:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.684 resv_hugepages=0 00:04:52.684 07:10:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.684 surplus_hugepages=0 00:04:52.684 anon_hugepages=0 00:04:52.684 07:10:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.684 07:10:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.685 07:10:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.685 07:10:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.685 07:10:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.685 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.685 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:52.685 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:52.685 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.685 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.685 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.685 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.685 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.685 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.685 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.685 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6451300 kB' 'MemAvailable: 9374484 kB' 'Buffers: 3696 kB' 'Cached: 3123008 kB' 'SwapCached: 0 kB' 'Active: 497776 kB' 'Inactive: 2747032 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190764 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102528 kB' 'KernelStack: 6704 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:04:52.685 07:10:54 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: every /proc/meminfo field from MemTotal through Unaccepted compared against HugePages_Total, each skipped with continue / IFS=': ' / read -r var val _]
00:04:52.686 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.686 07:10:54 -- setup/common.sh@33 -- # echo 1024
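What the trace above is doing: setup/common.sh's get_meminfo helper walks /proc/meminfo entry by entry until it reaches the requested key (here HugePages_Total) and echoes its value (1024). A rough, self-contained sketch of that lookup follows; the function name and exact structure are illustrative, not the SPDK script verbatim.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below
# Sketch: fetch one key's value from /proc/meminfo, or from a NUMA node's
# meminfo file when a node id is given (illustrative helper, not SPDK's own).
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # e.g. prints 1024 on this runner during default_setup
get_meminfo_sketch HugePages_Surp 0    # per-node variant, prints 0 here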
00:04:52.686 07:10:54 -- setup/common.sh@33 -- # return 0
00:04:52.686 07:10:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.686 07:10:54 -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.687 07:10:54 -- setup/hugepages.sh@27 -- # local node
00:04:52.687 07:10:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.687 07:10:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:52.687 07:10:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:52.687 07:10:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.687 07:10:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.687 07:10:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.687 07:10:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.687 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.687 07:10:54 -- setup/common.sh@18 -- # local node=0
00:04:52.687 07:10:54 -- setup/common.sh@19 -- # local var val
00:04:52.687 07:10:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:52.687 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.687 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.687 07:10:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.687 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.687 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.687 07:10:54 -- setup/common.sh@31 -- # IFS=': '
00:04:52.687 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6451300 kB' 'MemUsed: 5787816 kB' 'SwapCached: 0 kB' 'Active: 497816 kB' 'Inactive: 2747032 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3126704 kB' 'Mapped: 50900 kB' 'AnonPages: 119732 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88236 kB' 'Slab: 190764 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:52.687 07:10:54 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: every node0 meminfo field from MemTotal through HugePages_Free compared against HugePages_Surp, each skipped with continue / IFS=': ' / read -r var val _]
00:04:52.688 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.688 07:10:54 -- setup/common.sh@33 -- # echo 0
00:04:52.688 07:10:54 -- setup/common.sh@33 -- # return 0
00:04:52.688 07:10:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.688 07:10:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.688 07:10:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.688 node0=1024 expecting 1024
00:04:52.688 ************************************
00:04:52.688 END TEST default_setup
00:04:52.688 ************************************
00:04:52.688 07:10:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.688 07:10:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:52.688 07:10:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:52.688
00:04:52.688 real 0m1.071s
00:04:52.688 user 0m0.497s
00:04:52.688 sys 0m0.492s
00:04:52.688 07:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.688 07:10:54 -- common/autotest_common.sh@10 -- # set +x
00:04:52.688 07:10:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:52.688 07:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:52.688 07:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:52.688 07:10:54 -- common/autotest_common.sh@10 -- # set +x
00:04:52.688 ************************************
00:04:52.688 START TEST per_node_1G_alloc
00:04:52.688 ************************************
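The default_setup pass that just ended distributed the expected page count across NUMA nodes and compared it with what each node actually reports, which is where the "node0=1024 expecting 1024" line comes from. A standalone sketch of that per-node accounting is below; it reads the per-node sysfs meminfo with awk instead of the traced bash loop, and expected_pages is an illustrative stand-in for the script's nodes_test array.

#!/usr/bin/env bash
# Sketch: compare each NUMA node's reported hugepage count against the count
# the test expected to land on that node.
declare -A expected_pages=([0]=1024)   # e.g. default_setup expects all 1024 pages on node0

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
    actual=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node$node=$actual expecting ${expected_pages[$node]:-0}"
    (( actual == ${expected_pages[$node]:-0} )) || exit 1
done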
00:04:52.688 07:10:54 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:52.688 07:10:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:52.688 07:10:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:52.688 07:10:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.688 07:10:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:52.688 07:10:54 -- setup/hugepages.sh@51 -- # shift 00:04:52.688 07:10:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:52.688 07:10:54 -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.688 07:10:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.688 07:10:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.688 07:10:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:52.688 07:10:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:52.688 07:10:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.688 07:10:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.688 07:10:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.688 07:10:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.688 07:10:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.688 07:10:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:52.688 07:10:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.688 07:10:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:52.688 07:10:54 -- setup/hugepages.sh@73 -- # return 0 00:04:52.688 07:10:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:52.688 07:10:54 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:52.688 07:10:54 -- setup/hugepages.sh@146 -- # setup output 00:04:52.688 07:10:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.688 07:10:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.209 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.209 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.209 07:10:54 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:53.209 07:10:54 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:53.209 07:10:54 -- setup/hugepages.sh@89 -- # local node 00:04:53.209 07:10:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.209 07:10:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.209 07:10:54 -- setup/hugepages.sh@92 -- # local surp 00:04:53.209 07:10:54 -- setup/hugepages.sh@93 -- # local resv 00:04:53.209 07:10:54 -- setup/hugepages.sh@94 -- # local anon 00:04:53.209 07:10:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.209 07:10:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.209 07:10:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.209 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:53.209 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:53.209 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.209 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.209 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.210 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.210 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.210 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.210 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 
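In the trace above, get_test_nr_hugepages converts the requested size (1048576 kB, i.e. 1 GiB) into a hugepage count using the default 2048 kB hugepage size, pins all of it to node 0, and hands the result to scripts/setup.sh. The arithmetic, with illustrative variable names, is simply:

# 1 GiB expressed in kB, divided by the default hugepage size from /proc/meminfo
size_kb=1048576
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"              # the values passed to scripts/setup.sh here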
00:04:53.210 07:10:54 -- setup/common.sh@31 -- # read -r var val _
00:04:53.210 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7503712 kB' 'MemAvailable: 10426900 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498168 kB' 'Inactive: 2747036 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190812 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102576 kB' 'KernelStack: 6736 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@31-32: every /proc/meminfo field from MemTotal through HardwareCorrupted compared against AnonHugePages, each skipped with continue / IFS=': ' / read -r var val _]
00:04:53.211 07:10:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.211 07:10:54 -- setup/common.sh@33 -- # echo 0
00:04:53.211 07:10:54 -- setup/common.sh@33 -- # return 0
00:04:53.211 07:10:54 -- setup/hugepages.sh@97 -- # anon=0
00:04:53.211 07:10:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:53.211 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.211 07:10:54 -- setup/common.sh@18 -- # local node=
00:04:53.211 07:10:54 -- setup/common.sh@19 -- # local var val
00:04:53.211 07:10:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.211 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.211 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.211 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.211 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.211 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.211 07:10:54 -- setup/common.sh@31 -- # IFS=': '
00:04:53.211 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7503972 kB' 'MemAvailable: 10427160 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498012 kB' 'Inactive: 2747036 kB' 'Active(anon): 128828 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190820 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102584 kB' 'KernelStack: 6736 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:04:53.212 07:10:54 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: every /proc/meminfo field from MemTotal through HugePages_Rsvd compared against HugePages_Surp, each skipped with continue / IFS=': ' / read -r var val _]
00:04:53.212 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.212 07:10:54 -- setup/common.sh@33 -- # echo 0
00:04:53.213 07:10:54 -- setup/common.sh@33 -- # return 0
00:04:53.213 07:10:54 -- setup/hugepages.sh@99 -- # surp=0
00:04:53.213 07:10:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.213 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.213 07:10:54 -- setup/common.sh@18 -- # local node=
00:04:53.213 07:10:54 -- setup/common.sh@19 -- # local var val
00:04:53.213 07:10:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.213 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.213 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.213 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.213 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.213 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.213 07:10:54 -- setup/common.sh@31 -- # IFS=': '
00:04:53.213 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7504008 kB' 'MemAvailable: 10427196 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497888 kB' 'Inactive: 2747036 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190800 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102564 kB' 'KernelStack: 6704 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:04:53.213 07:10:54 -- setup/common.sh@31 -- # read -r var val _
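At this point verify_nr_hugepages has collected anon=0 and surp=0 and is reading HugePages_Rsvd; those values feed the same invariant that was asserted at setup/hugepages.sh@110 during default_setup, now with a target of 512 pages. A condensed sketch of that check, using an illustrative awk helper instead of the traced get_meminfo loop:

# Illustrative helper: value of one /proc/meminfo key
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(meminfo_val HugePages_Total)   # 512 after the NRHUGE=512 HUGENODE=0 setup
surp=$(meminfo_val HugePages_Surp)     # surplus pages, 0 here
resv=$(meminfo_val HugePages_Rsvd)     # reserved pages, 0 here
nr_hugepages=512                       # what this test configured

# Same form as the traced assertion: reported total must equal the configured
# count plus any surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) && echo OK || echo MISMATCH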
[setup/common.sh@31-32: /proc/meminfo fields from MemTotal through Percpu compared against HugePages_Rsvd, each skipped with continue / IFS=': ' / read -r var val _]
00:04:53.214 07:10:54 --
setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.214 07:10:54 -- setup/common.sh@33 -- # echo 0 00:04:53.214 07:10:54 -- setup/common.sh@33 -- # return 0 00:04:53.214 nr_hugepages=512 00:04:53.214 resv_hugepages=0 00:04:53.214 surplus_hugepages=0 00:04:53.214 anon_hugepages=0 00:04:53.214 07:10:54 -- setup/hugepages.sh@100 -- # 
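The long run of "continue" entries above is setup/common.sh's get_meminfo helper scanning the meminfo fields one at a time until it reaches the requested key, here HugePages_Rsvd, which resolves to 0 and feeds the resv_hugepages=0 summary that follows. A condensed sketch of that loop, reconstructed from the xtrace shown here; the exact argument handling and error paths are assumptions for illustration:

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    # $1 = meminfo field to fetch, $2 = optional NUMA node number
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node queries switch to the sysfs copy when it exists
    # (with an empty node the path does not exist, so /proc/meminfo is used).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Node meminfo lines carry a "Node N " prefix; strip it so the field
    # names match the /proc/meminfo spelling.
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the fields; every non-matching line is one "continue" in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}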
resv=0 00:04:53.214 07:10:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:53.214 07:10:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.214 07:10:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.214 07:10:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.214 07:10:54 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.214 07:10:54 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:53.214 07:10:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.214 07:10:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.214 07:10:54 -- setup/common.sh@18 -- # local node= 00:04:53.214 07:10:54 -- setup/common.sh@19 -- # local var val 00:04:53.214 07:10:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.214 07:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.214 07:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.214 07:10:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.214 07:10:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.214 07:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7504008 kB' 'MemAvailable: 10427196 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497968 kB' 'Inactive: 2747036 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190800 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102564 kB' 'KernelStack: 6720 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.214 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.214 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 
07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:54 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.215 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.215 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.216 07:10:55 -- setup/common.sh@33 -- # echo 512 00:04:53.216 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:53.216 07:10:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.216 07:10:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.216 07:10:55 -- setup/hugepages.sh@27 -- # local node 00:04:53.216 07:10:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.216 07:10:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.216 07:10:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.216 07:10:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.216 07:10:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.216 07:10:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.216 07:10:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.216 07:10:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.216 07:10:55 -- setup/common.sh@18 -- # local node=0 00:04:53.216 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:53.216 07:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.216 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.216 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.216 07:10:55 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.216 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.216 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7504296 kB' 'MemUsed: 4734820 kB' 'SwapCached: 0 kB' 'Active: 497860 kB' 'Inactive: 2747036 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3126708 kB' 'Mapped: 50900 kB' 'AnonPages: 119760 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88236 kB' 'Slab: 190796 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- 
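At this point the test switches from the global /proc/meminfo to node 0's own counters under /sys/devices/system/node/node0/meminfo: get_nodes records how many huge pages each NUMA node actually received (nodes_sys), and the loop that follows folds reserved and surplus pages into the expected per-node figure (nodes_test) before comparing the two. A sketch of that bookkeeping, following the hugepages.sh line numbers visible in the trace; how the per-node count is actually read is not shown, so the get_meminfo calls below are an illustrative stand-in (get_meminfo itself is the helper sketched earlier):

shopt -s extglob
declare -a nodes_sys nodes_test
resv=0   # obtained earlier via get_meminfo HugePages_Rsvd (the scan above)

get_nodes() {                       # hugepages.sh@27-@33 in the trace
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))
}

check_nodes() {                     # hugepages.sh@115-@130 in the trace
    local node surp
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
        # On this single-node VM the trace prints "node0=512 expecting 512".
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
    done
}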
setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.216 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.216 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.217 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.217 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.475 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.475 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.475 07:10:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.476 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.476 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.476 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.476 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.476 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.476 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.476 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.476 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.476 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.476 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.476 07:10:55 -- setup/common.sh@33 -- # echo 0 00:04:53.476 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:53.476 node0=512 expecting 512 00:04:53.476 ************************************ 00:04:53.476 END TEST per_node_1G_alloc 00:04:53.476 ************************************ 00:04:53.476 07:10:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.476 07:10:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.476 07:10:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.476 07:10:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.476 07:10:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.476 00:04:53.476 real 0m0.621s 00:04:53.476 user 0m0.285s 00:04:53.476 sys 0m0.339s 00:04:53.476 07:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.476 07:10:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.476 07:10:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:53.476 07:10:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.476 07:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.476 07:10:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.476 ************************************ 00:04:53.476 START TEST even_2G_alloc 00:04:53.476 ************************************ 00:04:53.476 07:10:55 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:53.476 07:10:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:53.476 07:10:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.476 07:10:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.476 07:10:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.476 07:10:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.476 07:10:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.476 07:10:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.476 07:10:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.476 07:10:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.476 07:10:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.476 07:10:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:53.476 07:10:55 -- setup/hugepages.sh@83 -- # : 0 00:04:53.476 07:10:55 -- 
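per_node_1G_alloc has just passed (node0=512 expecting 512, about 0.6 s of wall time) and run_test moves on to even_2G_alloc, which asks get_test_nr_hugepages for 2097152, presumably kB, i.e. 2 GiB. With the 2048 kB Hugepagesize reported in the meminfo dumps that works out to the nr_hugepages=1024 seen in the trace, which get_test_nr_hugepages_per_node then spreads over the available nodes (just one here). A hedged sketch of that arithmetic; the trace only shows the results, so the division and the even split are inferences:

default_hugepages=2048          # kB, matches "Hugepagesize: 2048 kB" in the dumps

get_test_nr_hugepages() {       # hugepages.sh@49-@57
    local size=$1
    (( size >= default_hugepages )) || return 1
    # Inference from 2097152 / 2048 = 1024; the trace only prints the result.
    nr_hugepages=$(( size / default_hugepages ))
    get_test_nr_hugepages_per_node
}

get_test_nr_hugepages_per_node() {   # hugepages.sh@62-@84
    local user_nodes=()
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=1               # one NUMA node on this VM, per the trace
    nodes_test=()
    # With a single node the whole count lands in nodes_test[0]; the even
    # split below is an assumption about the multi-node case.
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] )) || :
        (( _no_nodes-- ))
    done
}

# even_2G_alloc then runs "NRHUGE=1024 HUGE_EVEN_ALLOC=yes setup output"
# (hugepages.sh@153), i.e. scripts/setup.sh, to actually reserve the pages.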
setup/hugepages.sh@84 -- # : 0 00:04:53.476 07:10:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.476 07:10:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:53.476 07:10:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:53.476 07:10:55 -- setup/hugepages.sh@153 -- # setup output 00:04:53.476 07:10:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.476 07:10:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.736 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.736 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.736 07:10:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:53.736 07:10:55 -- setup/hugepages.sh@89 -- # local node 00:04:53.736 07:10:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.736 07:10:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.736 07:10:55 -- setup/hugepages.sh@92 -- # local surp 00:04:53.736 07:10:55 -- setup/hugepages.sh@93 -- # local resv 00:04:53.736 07:10:55 -- setup/hugepages.sh@94 -- # local anon 00:04:53.736 07:10:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.736 07:10:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.736 07:10:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.736 07:10:55 -- setup/common.sh@18 -- # local node= 00:04:53.736 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:53.736 07:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.736 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.736 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.736 07:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.736 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.736 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6472936 kB' 'MemAvailable: 9396124 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498424 kB' 'Inactive: 2747036 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120120 kB' 'Mapped: 51020 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190836 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102600 kB' 'KernelStack: 6712 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == 
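After scripts/setup.sh has rebound the PCI devices, verify_nr_hugepages re-reads the counters: it only counts AnonHugePages (transparent huge pages) when THP is not pinned to "never" (this host reports "always [madvise] never"), then collects the surplus and reserved counts so the later "(( 1024 == nr_hugepages + surp + resv ))" assertion tolerates pages the kernel has reserved or over-allocated. A rough outline as far as the trace reveals it; the THP sysfs path is the standard kernel location, EXPECTED is a hypothetical stand-in for the literal 1024 in the trace, and everything else not shown in the xtrace is an assumption:

EXPECTED=1024   # stand-in for the literal value the trace compares against

verify_nr_hugepages() {             # hugepages.sh@89-@110 in the trace
    local node sorted_t sorted_s surp resv anon=0
    # Count transparent (anon) huge pages only when THP is not "[never]";
    # here the branch is taken and AnonHugePages comes back as 0.
    [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] \
        && anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    nr_hugepages=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The requested pool must account for every reported, surplus and
    # reserved page.
    (( EXPECTED == nr_hugepages + surp + resv ))
}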
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.736 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.736 07:10:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 
07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # 
continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.737 07:10:55 -- setup/common.sh@33 -- # echo 0 00:04:53.737 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:53.737 07:10:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:53.737 07:10:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.737 07:10:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.737 07:10:55 -- setup/common.sh@18 -- # local node= 00:04:53.737 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:53.737 07:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.737 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.737 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.737 07:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.737 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.737 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.737 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6472684 kB' 'MemAvailable: 9395872 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498120 kB' 'Inactive: 2747036 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190844 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102608 kB' 'KernelStack: 6704 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.737 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.737 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # 
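The AnonHugePages lookup has just come back as anon=0 and the script moves on to HugePages_Surp. The state all of these scans boil down to can be inspected directly from the shell; this is a stand-alone illustration, not part of the SPDK scripts, with the expected values taken from the meminfo dumps above:

grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
# Expected for this run, per the trace:
#   HugePages_Total:    1024
#   HugePages_Free:     1024
#   HugePages_Rsvd:        0
#   HugePages_Surp:        0
#   Hugepagesize:       2048 kB
#   Hugetlb:         2097152 kB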
continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.999 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.999 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- 
# continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.000 07:10:55 -- setup/common.sh@33 -- # echo 0 00:04:54.000 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:54.000 07:10:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.000 07:10:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.000 07:10:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.000 07:10:55 -- setup/common.sh@18 -- # local node= 00:04:54.000 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:54.000 07:10:55 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:54.000 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.000 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.000 07:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.000 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.000 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6472684 kB' 'MemAvailable: 9395872 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497928 kB' 'Inactive: 2747036 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119608 kB' 'Mapped: 51160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190844 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102608 kB' 'KernelStack: 6720 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 315960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.000 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.000 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- 
setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.001 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.001 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.002 07:10:55 -- setup/common.sh@33 -- # echo 0 00:04:54.002 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:54.002 07:10:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.002 07:10:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.002 nr_hugepages=1024 00:04:54.002 07:10:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.002 resv_hugepages=0 00:04:54.002 07:10:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.002 surplus_hugepages=0 00:04:54.002 07:10:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.002 anon_hugepages=0 00:04:54.002 07:10:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.002 07:10:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.002 07:10:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.002 07:10:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.002 07:10:55 -- setup/common.sh@18 -- # local node= 00:04:54.002 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:54.002 07:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.002 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.002 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.002 07:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.002 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.002 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6473008 kB' 'MemAvailable: 9396196 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 
498028 kB' 'Inactive: 2747036 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190832 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102596 kB' 'KernelStack: 6656 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- 
setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.002 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.002 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 
00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.003 07:10:55 -- setup/common.sh@33 -- # echo 1024 00:04:54.003 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:54.003 07:10:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.003 07:10:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.003 07:10:55 -- setup/hugepages.sh@27 -- # local node 00:04:54.003 07:10:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.003 07:10:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.003 07:10:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.003 07:10:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.003 07:10:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.003 07:10:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.003 07:10:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.003 07:10:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.003 07:10:55 -- setup/common.sh@18 -- # local node=0 00:04:54.003 07:10:55 -- setup/common.sh@19 -- # local var val 00:04:54.003 07:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.003 07:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.003 07:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.003 07:10:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.003 07:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.003 07:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6473008 kB' 'MemUsed: 5766108 kB' 'SwapCached: 0 kB' 'Active: 497604 kB' 'Inactive: 2747036 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3126708 kB' 'Mapped: 50948 kB' 'AnonPages: 119792 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88236 kB' 'Slab: 190832 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.003 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.003 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- 
setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # continue 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.004 07:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.004 07:10:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.004 07:10:55 -- setup/common.sh@33 -- # echo 0 00:04:54.004 07:10:55 -- setup/common.sh@33 -- # return 0 00:04:54.004 node0=1024 expecting 1024 00:04:54.004 07:10:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.004 07:10:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.004 07:10:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.004 07:10:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.004 07:10:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.004 07:10:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.004 00:04:54.004 real 0m0.614s 00:04:54.004 user 0m0.301s 00:04:54.004 sys 0m0.327s 
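[editor's note] The block of trace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or a per-node meminfo file) until it reaches the one it was asked for, which is why every field produces an IFS=': ' / read / continue triple in the log. A minimal bash sketch of that lookup, simplified from what the trace shows (the real helper also strips the "Node N" prefixes with an extglob substitution and mapfile, so treat this as an illustration, not the script itself):

    get_meminfo_sketch() {            # usage: get_meminfo_sketch HugePages_Surp [node]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument the per-node counters are read instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of "continue" above
            echo "$val"                        # numeric value only; the "kB" unit lands in $_
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

With values collected this way, the even_2G_alloc verification reduces to exactly the checks visible at the end of the trace: anon=0, surp=0, resv=0, (( 1024 == nr_hugepages + surp + resv )), and the per-node expectation echoed as "node0=1024 expecting 1024".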
00:04:54.004 ************************************ 00:04:54.004 END TEST even_2G_alloc 00:04:54.004 ************************************ 00:04:54.004 07:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.004 07:10:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 07:10:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:54.004 07:10:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.004 07:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.004 07:10:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 ************************************ 00:04:54.004 START TEST odd_alloc 00:04:54.004 ************************************ 00:04:54.004 07:10:55 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:54.004 07:10:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:54.004 07:10:55 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:54.004 07:10:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.004 07:10:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.004 07:10:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:54.004 07:10:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.004 07:10:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.005 07:10:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.005 07:10:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:54.005 07:10:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.005 07:10:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.005 07:10:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.005 07:10:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.005 07:10:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.005 07:10:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.005 07:10:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:54.005 07:10:55 -- setup/hugepages.sh@83 -- # : 0 00:04:54.005 07:10:55 -- setup/hugepages.sh@84 -- # : 0 00:04:54.005 07:10:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.005 07:10:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:54.005 07:10:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:54.005 07:10:55 -- setup/hugepages.sh@160 -- # setup output 00:04:54.005 07:10:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.005 07:10:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.575 07:10:56 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:54.575 07:10:56 -- setup/hugepages.sh@89 -- # local node 00:04:54.575 07:10:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.575 07:10:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.575 07:10:56 -- setup/hugepages.sh@92 -- # local surp 00:04:54.575 07:10:56 -- setup/hugepages.sh@93 -- # local resv 00:04:54.575 07:10:56 -- setup/hugepages.sh@94 -- # local anon 00:04:54.575 07:10:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.575 07:10:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.575 07:10:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.575 07:10:56 -- setup/common.sh@18 -- # local node= 
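[editor's note] The odd_alloc setup traced above requests HUGEMEM=2049 (MiB) against the default 2048 kB hugepage size, which is where size=2098176 and nr_hugepages=1025 come from. A quick back-of-the-envelope check in plain shell arithmetic (not the repo's code, just the numbers):

    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))   # 2098176 kB, matching size=2098176 in the trace
    echo $(( size_kb / 2048 ))         # 1024 whole 2048 kB pages, with 1024 kB left over
    # the test therefore asks for 1025 pages -- a deliberately odd count,
    # matching nr_hugepages=1025 traced above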
00:04:54.575 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:54.575 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.575 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.575 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.575 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.575 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.575 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6477520 kB' 'MemAvailable: 9400708 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498212 kB' 'Inactive: 2747036 kB' 'Active(anon): 129028 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120112 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190840 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102604 kB' 'KernelStack: 6696 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 
07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.575 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.575 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 
00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.576 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:54.576 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:54.576 07:10:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.576 07:10:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.576 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.576 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:54.576 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:54.576 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.576 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.576 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.576 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.576 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.576 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 
07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6477556 kB' 'MemAvailable: 9400744 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497940 kB' 'Inactive: 2747036 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190852 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102616 kB' 'KernelStack: 6720 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.576 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.576 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 
07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 
07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.577 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.577 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.578 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:54.578 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:54.578 07:10:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.578 07:10:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.578 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.578 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:54.578 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:54.578 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.578 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.578 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.578 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.578 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.578 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6477556 kB' 'MemAvailable: 9400744 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498388 kB' 'Inactive: 2747036 kB' 'Active(anon): 129204 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120072 kB' 'Mapped: 51208 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190852 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102616 kB' 'KernelStack: 6768 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 
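[editor's note] The long runs of "IFS=': ' / read -r var val _ / continue" above and below are get_meminfo walking every /proc/meminfo key until it reaches the one requested (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, then HugePages_Total). A minimal standalone sketch of that loop, with names simplified; the real helper is the traced setup/common.sh, and its per-node path handling and "Node N " prefix stripping are omitted here:

    get_meminfo() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # traced as "# continue" for every non-matching key
            echo "$val"
            return 0
        done
        echo 0
    }
    get_meminfo HugePages_Rsvd   # prints 0 on this VM, per the meminfo dump above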
00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.578 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.578 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 
-- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 
07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.579 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:54.579 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:54.579 07:10:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.579 07:10:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:54.579 nr_hugepages=1025 00:04:54.579 07:10:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.579 resv_hugepages=0 00:04:54.579 07:10:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.579 surplus_hugepages=0 00:04:54.579 07:10:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.579 anon_hugepages=0 00:04:54.579 07:10:56 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.579 07:10:56 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:54.579 07:10:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.579 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.579 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:54.579 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:54.579 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.579 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.579 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.579 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.579 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.579 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6478004 kB' 'MemAvailable: 9401192 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497852 kB' 'Inactive: 2747036 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119788 kB' 'Mapped: 50948 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190848 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102612 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.579 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.579 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 
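[editor's note] Once the three lookups return, the consistency checks traced at setup/hugepages.sh@107 and @109 a few lines above reduce to simple arithmetic on values visible in this trace (nr_hugepages=1025, surplus 0, reserved 0, HugePages_Total 1025 in the meminfo dump). Restated as plain shell, with the values taken from the trace:

    nr_hugepages=1025; surp=0; resv=0           # values echoed/parsed above
    (( 1025 == nr_hugepages + surp + resv ))    # the hugepages.sh@107 check
    (( 1025 == nr_hugepages ))                  # the hugepages.sh@109 check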
00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 
07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.580 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.580 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.581 07:10:56 -- setup/common.sh@33 -- # echo 1025 00:04:54.581 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:54.581 07:10:56 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.581 07:10:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.581 07:10:56 -- setup/hugepages.sh@27 -- # local node 00:04:54.581 07:10:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.581 07:10:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:54.581 07:10:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.581 07:10:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.581 07:10:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.581 07:10:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.581 07:10:56 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.581 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.581 07:10:56 -- setup/common.sh@18 -- # local node=0 00:04:54.581 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:54.581 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.581 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.581 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.581 07:10:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.581 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.581 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6482428 kB' 'MemUsed: 5756688 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 2747036 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3126708 kB' 'Mapped: 50948 kB' 'AnonPages: 119788 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88236 kB' 'Slab: 190824 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 
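When get_meminfo is asked for a per-node value (HugePages_Surp on node 0 above), the trace shows it switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the leading "Node 0 " prefix before running the same key scan. A hedged sketch of that variant, inferred from the trace (the helper name and the sed-based prefix strip are illustrative; the traced script strips the prefix with parameter expansion instead):
# Sketch only: per-node meminfo lookup, modelled on the traced behaviour.
get_node_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    # Per-node files contain lines such as "Node 0 HugePages_Surp: 0".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed "s/^Node $node //" "$mem_f")
    return 1
}
# Example: get_node_meminfo_sketch HugePages_Surp 0   -> "0" in this run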
00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.581 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.581 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 
07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # continue 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.582 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.582 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.582 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:54.582 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:54.582 07:10:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.582 07:10:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.582 07:10:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.582 node0=1025 expecting 1025 00:04:54.582 07:10:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.582 07:10:56 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:54.582 07:10:56 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:54.582 00:04:54.582 real 0m0.627s 00:04:54.582 user 0m0.302s 00:04:54.582 sys 0m0.333s 00:04:54.582 07:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.582 07:10:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 END TEST odd_alloc 00:04:54.582 ************************************ 00:04:54.840 07:10:56 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:54.840 07:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.840 07:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.840 07:10:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.840 ************************************ 00:04:54.840 START TEST custom_alloc 00:04:54.840 ************************************ 00:04:54.840 07:10:56 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:54.840 07:10:56 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:54.840 07:10:56 -- setup/hugepages.sh@169 -- # local node 00:04:54.840 07:10:56 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:54.840 07:10:56 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:54.840 07:10:56 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:54.840 07:10:56 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:54.840 07:10:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.840 07:10:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
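With odd_alloc done, custom_alloc begins by asking get_test_nr_hugepages for 1048576 kB; at the default 2048 kB (2 MiB) hugepage size that is the nr_hugepages=512 seen in the trace, all of it placed on the single node as HUGENODE='nodes_hp[0]=512'. The conversion, restated as a worked sketch (the 2048 kB figure comes from the "Hugepagesize: 2048 kB" lines in this run's meminfo dumps; variable names are illustrative):
# Sketch of the size-to-page-count arithmetic visible in the trace.
size_kb=1048576                    # 1 GiB requested by custom_alloc
hugepagesize_kb=2048               # "Hugepagesize: 2048 kB"
if (( size_kb >= hugepagesize_kb )); then
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
fi
echo "nr_hugepages=$nr_hugepages"  # -> nr_hugepages=512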
00:04:54.840 07:10:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.840 07:10:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.840 07:10:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.840 07:10:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.840 07:10:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.840 07:10:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.840 07:10:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.840 07:10:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.840 07:10:56 -- setup/hugepages.sh@83 -- # : 0 00:04:54.840 07:10:56 -- setup/hugepages.sh@84 -- # : 0 00:04:54.840 07:10:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:54.840 07:10:56 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:54.840 07:10:56 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:54.840 07:10:56 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:54.840 07:10:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.840 07:10:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.840 07:10:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.840 07:10:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.840 07:10:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.840 07:10:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.840 07:10:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:54.840 07:10:56 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.840 07:10:56 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:54.840 07:10:56 -- setup/hugepages.sh@78 -- # return 0 00:04:54.840 07:10:56 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:54.840 07:10:56 -- setup/hugepages.sh@187 -- # setup output 00:04:54.840 07:10:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.840 07:10:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.100 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.100 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.100 07:10:56 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:55.100 07:10:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:55.100 07:10:56 -- setup/hugepages.sh@89 -- # local node 00:04:55.100 07:10:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.100 07:10:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.100 07:10:56 -- setup/hugepages.sh@92 -- # local surp 00:04:55.100 07:10:56 -- setup/hugepages.sh@93 -- # local resv 00:04:55.100 07:10:56 -- setup/hugepages.sh@94 -- # local anon 00:04:55.100 07:10:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.100 07:10:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.100 
07:10:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.100 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:55.100 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:55.100 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.100 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.100 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.100 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.100 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.100 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7530740 kB' 'MemAvailable: 10453928 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498232 kB' 'Inactive: 2747036 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190832 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102596 kB' 'KernelStack: 6696 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.100 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.100 
07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.100 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.101 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.101 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:55.101 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:55.101 07:10:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.101 07:10:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.101 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.101 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:55.101 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:55.101 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.101 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.101 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.101 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.101 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.101 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
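The anon=0 step above follows the transparent-hugepage policy test traced earlier ("[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]"): AnonHugePages is only consulted when THP is not pinned to "never", and in this run it reads back 0 kB. A hedged sketch of that decision, using the standard sysfs path (surrounding variable names are illustrative):
# Sketch: count AnonHugePages only if transparent hugepages are not disabled.
thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
# Example policy string: "always [madvise] never" (brackets mark the active mode).
if [[ $thp_policy != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon:-0}"             # -> anon=0 in this run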
00:04:55.101 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.101 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7530740 kB' 'MemAvailable: 10453928 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497716 kB' 'Inactive: 2747036 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119620 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190844 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102608 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 
00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.102 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.102 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.363 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.363 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 
07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.364 07:10:56 -- setup/common.sh@33 -- # echo 0 00:04:55.364 07:10:56 -- setup/common.sh@33 -- # return 0 00:04:55.364 07:10:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.364 07:10:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.364 07:10:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.364 07:10:56 -- setup/common.sh@18 -- # local node= 00:04:55.364 07:10:56 -- setup/common.sh@19 -- # local var val 00:04:55.364 07:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.364 07:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.364 07:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.364 07:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.364 07:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.364 07:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7530740 kB' 'MemAvailable: 10453928 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497980 kB' 'Inactive: 2747036 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119876 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190844 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102608 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.364 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.364 07:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 
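The surp=0 assignment above and the HugePages_Rsvd lookup now in progress feed the same check that closed odd_alloc: verify_nr_hugepages confirms that HugePages_Total equals nr_hugepages plus surplus plus reserved pages, then repeats the sum per NUMA node. Restated with the values visible in this run's meminfo dumps (512 total, 0 surplus, 0 reserved; variable names are illustrative, not the hugepages.sh internals):
# Sketch of the verification arithmetic, using values from this run.
nr_hugepages=512   # requested by custom_alloc
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd
total=512          # HugePages_Total
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=$(( nr_hugepages + surp + resv )) expecting $total"
fi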
00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:56 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.365 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.365 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.365 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 
-- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.366 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.366 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.366 nr_hugepages=512 00:04:55.366 resv_hugepages=0 00:04:55.366 surplus_hugepages=0 00:04:55.366 anon_hugepages=0 00:04:55.366 07:10:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.366 07:10:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:55.366 07:10:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.366 07:10:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.366 07:10:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.366 07:10:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.366 07:10:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:55.366 07:10:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.366 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.366 07:10:57 -- setup/common.sh@18 -- # local node= 00:04:55.366 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.366 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.366 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.366 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.366 07:10:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.366 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.366 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7530740 kB' 'MemAvailable: 10453928 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 2747036 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190852 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102616 kB' 'KernelStack: 6720 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.366 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.366 07:10:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.366 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.367 07:10:57 -- setup/common.sh@33 -- # echo 512 00:04:55.367 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.367 07:10:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.367 07:10:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.367 07:10:57 -- setup/hugepages.sh@27 -- # local node 00:04:55.367 07:10:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.367 07:10:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.367 07:10:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.367 07:10:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.367 07:10:57 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:55.367 07:10:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.367 07:10:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.367 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.367 07:10:57 -- setup/common.sh@18 -- # local node=0 00:04:55.367 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.367 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.367 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.367 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.367 07:10:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.367 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.367 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7538820 kB' 'MemUsed: 4700296 kB' 'SwapCached: 0 kB' 'Active: 497708 kB' 'Inactive: 2747036 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3126708 kB' 'Mapped: 50900 kB' 'AnonPages: 119916 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88236 kB' 'Slab: 190832 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 
07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.367 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.367 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.368 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.368 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.368 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.368 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.368 07:10:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.368 07:10:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.368 07:10:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.368 07:10:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.368 node0=512 expecting 512 00:04:55.368 ************************************ 00:04:55.368 END TEST custom_alloc 00:04:55.368 ************************************ 00:04:55.368 07:10:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.368 07:10:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:55.368 00:04:55.368 real 0m0.626s 00:04:55.368 user 0m0.273s 00:04:55.368 sys 0m0.358s 00:04:55.368 07:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.368 07:10:57 -- common/autotest_common.sh@10 -- # set +x 00:04:55.368 07:10:57 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:55.368 07:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.368 07:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.368 07:10:57 -- common/autotest_common.sh@10 -- # set +x 00:04:55.368 ************************************ 00:04:55.368 START TEST no_shrink_alloc 00:04:55.368 ************************************ 00:04:55.369 07:10:57 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:55.369 07:10:57 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:55.369 07:10:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.369 07:10:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:55.369 07:10:57 -- setup/hugepages.sh@51 -- # shift 00:04:55.369 07:10:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:55.369 07:10:57 -- setup/hugepages.sh@52 -- # local node_ids 00:04:55.369 07:10:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.369 07:10:57 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:55.369 07:10:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:55.369 07:10:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:55.369 07:10:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.369 07:10:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.369 07:10:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.369 07:10:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.369 07:10:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.369 07:10:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:55.369 07:10:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:55.369 07:10:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:55.369 07:10:57 -- setup/hugepages.sh@73 -- # return 0 00:04:55.369 07:10:57 -- setup/hugepages.sh@198 -- # setup output 00:04:55.369 07:10:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.369 07:10:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.938 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.938 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.938 07:10:57 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:55.938 07:10:57 -- setup/hugepages.sh@89 -- # local node 00:04:55.938 07:10:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.938 07:10:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.938 07:10:57 -- setup/hugepages.sh@92 -- # local surp 00:04:55.938 07:10:57 -- setup/hugepages.sh@93 -- # local resv 00:04:55.938 07:10:57 -- setup/hugepages.sh@94 -- # local anon 00:04:55.939 07:10:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.939 07:10:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.939 07:10:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.939 07:10:57 -- setup/common.sh@18 -- # local node= 00:04:55.939 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.939 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.939 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.939 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.939 07:10:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.939 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.939 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6486020 kB' 'MemAvailable: 9409208 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498256 kB' 'Inactive: 2747036 kB' 'Active(anon): 129072 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120160 kB' 'Mapped: 51028 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190852 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102616 kB' 'KernelStack: 6712 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.939 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.939 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- 
setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.940 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.940 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.940 07:10:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.940 07:10:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.940 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.940 07:10:57 -- setup/common.sh@18 -- # local node= 00:04:55.940 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.940 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.940 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.940 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.940 07:10:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.940 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.940 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6486020 kB' 'MemAvailable: 9409208 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 497944 kB' 'Inactive: 2747036 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190860 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102624 kB' 'KernelStack: 6704 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 314020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- 
setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.940 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.940 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- 
setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.941 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.941 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.941 07:10:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.941 07:10:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.941 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.941 07:10:57 -- setup/common.sh@18 -- # local node= 00:04:55.941 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.941 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.941 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.941 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.941 07:10:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.941 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.941 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6486020 kB' 'MemAvailable: 9409208 kB' 'Buffers: 3696 kB' 'Cached: 3123012 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 2747036 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190856 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102620 kB' 'KernelStack: 6720 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 315960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.941 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.941 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
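The long run of "continue" entries above is the setup script scanning /proc/meminfo key by key until it reaches the field named in the bracket test (HugePages_Rsvd at this point in the trace); every non-matching key produces one [[ ... ]] / continue pair, which is why this loop dominates the log. A minimal standalone sketch of that lookup pattern, reading /proc/meminfo only (the traced setup/common.sh can also read /sys/devices/system/node/node<N>/meminfo after stripping the leading "Node <N> " prefix); the function name meminfo_value is illustrative, not the script's own:

  # Print the value of a single /proc/meminfo field, e.g. HugePages_Rsvd.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Same test as in the trace: emit the value and stop once the requested key matches.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # Usage: meminfo_value HugePages_Rsvd   -> prints 0 on this machine, matching the echo in the trace above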
00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.942 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.942 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.943 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.943 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.943 07:10:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.943 nr_hugepages=1024 00:04:55.943 07:10:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.943 resv_hugepages=0 00:04:55.943 07:10:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.943 surplus_hugepages=0 00:04:55.943 07:10:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.943 anon_hugepages=0 00:04:55.943 07:10:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.943 07:10:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.943 07:10:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.943 07:10:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.943 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.943 07:10:57 -- setup/common.sh@18 -- # local node= 00:04:55.943 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.943 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.943 07:10:57 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:55.943 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.943 07:10:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.943 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.943 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6486020 kB' 'MemAvailable: 9409208 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 495232 kB' 'Inactive: 2747036 kB' 'Active(anon): 126048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117132 kB' 'Mapped: 50104 kB' 'Shmem: 10488 kB' 'KReclaimable: 88236 kB' 'Slab: 190788 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102552 kB' 'KernelStack: 6576 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
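By this point hugepages.sh has reduced the meminfo lookups to a handful of numbers (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, echoed just above), and the (( ... )) tests in the trace check a simple accounting identity: the kernel's HugePages_Total must equal the requested page count plus the surplus and reserved pages, first globally and then per node (node0 later in the trace). A hedged sketch of that arithmetic using the values echoed here; the variable names mirror the script, but the standalone check itself is only illustrative:

  nr_hugepages=1024   # requested pool size
  surp=0              # HugePages_Surp from get_meminfo
  resv=0              # HugePages_Rsvd from get_meminfo
  total=1024          # HugePages_Total from get_meminfo
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2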
00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.943 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.943 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # 
continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.944 07:10:57 -- setup/common.sh@33 -- # echo 1024 00:04:55.944 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.944 07:10:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.944 07:10:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.944 07:10:57 -- setup/hugepages.sh@27 -- # local node 00:04:55.944 07:10:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.944 07:10:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.944 07:10:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.944 07:10:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.944 07:10:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.944 07:10:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.944 07:10:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.944 07:10:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.944 07:10:57 -- setup/common.sh@18 -- # local node=0 00:04:55.944 07:10:57 -- setup/common.sh@19 -- # local var val 00:04:55.944 07:10:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.944 07:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.944 07:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.944 07:10:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.944 07:10:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.944 07:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6486020 kB' 'MemUsed: 5753096 kB' 'SwapCached: 0 kB' 'Active: 494676 kB' 'Inactive: 2747036 kB' 'Active(anon): 125492 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3126712 kB' 'Mapped: 50052 kB' 'AnonPages: 116668 kB' 'Shmem: 10488 kB' 'KernelStack: 6608 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88220 kB' 'Slab: 
190696 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.944 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.944 07:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 
07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # continue 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.945 07:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.945 07:10:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.945 07:10:57 -- setup/common.sh@33 -- # echo 0 00:04:55.945 07:10:57 -- setup/common.sh@33 -- # return 0 00:04:55.945 07:10:57 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:55.945 07:10:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.945 07:10:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.945 07:10:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.945 node0=1024 expecting 1024 00:04:55.945 07:10:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.945 07:10:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.945 07:10:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:55.945 07:10:57 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:55.945 07:10:57 -- setup/hugepages.sh@202 -- # setup output 00:04:55.945 07:10:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.945 07:10:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.516 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.516 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.516 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:56.516 07:10:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:56.516 07:10:58 -- setup/hugepages.sh@89 -- # local node 00:04:56.516 07:10:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.516 07:10:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.516 07:10:58 -- setup/hugepages.sh@92 -- # local surp 00:04:56.516 07:10:58 -- setup/hugepages.sh@93 -- # local resv 00:04:56.516 07:10:58 -- setup/hugepages.sh@94 -- # local anon 00:04:56.516 07:10:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.516 07:10:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.516 07:10:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.516 07:10:58 -- setup/common.sh@18 -- # local node= 00:04:56.516 07:10:58 -- setup/common.sh@19 -- # local var val 00:04:56.516 07:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.516 07:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.516 07:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.516 07:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.516 07:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.516 07:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6494676 kB' 'MemAvailable: 9417860 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 495348 kB' 'Inactive: 2747040 kB' 'Active(anon): 126164 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117348 kB' 'Mapped: 50260 kB' 'Shmem: 10488 kB' 'KReclaimable: 88220 kB' 'Slab: 190600 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102380 kB' 'KernelStack: 6712 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- 
setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.516 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.516 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.517 07:10:58 -- setup/common.sh@33 -- # echo 0 00:04:56.517 07:10:58 -- setup/common.sh@33 -- # return 0 00:04:56.517 07:10:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.517 07:10:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.517 07:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.517 07:10:58 -- setup/common.sh@18 -- # local node= 00:04:56.517 07:10:58 -- setup/common.sh@19 -- # local var val 00:04:56.517 07:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.517 07:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.517 07:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.517 07:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.517 07:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.517 07:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6494676 kB' 'MemAvailable: 9417860 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 495308 kB' 'Inactive: 2747040 kB' 'Active(anon): 126124 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117048 kB' 'Mapped: 50100 kB' 'Shmem: 10488 kB' 'KReclaimable: 88220 kB' 'Slab: 190592 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102372 kB' 'KernelStack: 6648 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 
07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.517 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.517 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
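The loop being traced above (for AnonHugePages, and here again for HugePages_Surp) is a get_meminfo-style helper scanning the meminfo file one 'key: value' line at a time and echoing the value of the requested field. A condensed, hypothetical sketch of that pattern follows; it is not the verbatim setup/common.sh source, and get_meminfo_sketch is an illustrative name:
#!/usr/bin/env bash
# Illustrative sketch only: scan the (optionally per-node) meminfo file and
# print the value of the requested field, defaulting to 0 if it is absent.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}
# Example (values as seen in this run): get_meminfo_sketch HugePages_Surp  -> 0
#                                       get_meminfo_sketch HugePages_Total -> 1024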
00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.518 07:10:58 -- 
setup/common.sh@33 -- # echo 0 00:04:56.518 07:10:58 -- setup/common.sh@33 -- # return 0 00:04:56.518 07:10:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.518 07:10:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.518 07:10:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.518 07:10:58 -- setup/common.sh@18 -- # local node= 00:04:56.518 07:10:58 -- setup/common.sh@19 -- # local var val 00:04:56.518 07:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.518 07:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.518 07:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.518 07:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.518 07:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.518 07:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.518 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.518 07:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6494676 kB' 'MemAvailable: 9417860 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 495036 kB' 'Inactive: 2747040 kB' 'Active(anon): 125852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116676 kB' 'Mapped: 50116 kB' 'Shmem: 10488 kB' 'KReclaimable: 88220 kB' 'Slab: 190592 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102372 kB' 'KernelStack: 6584 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- 
setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 
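Once this HugePages_Rsvd lookup returns, the verification that follows in the trace reduces to a small piece of arithmetic over the values just collected. An illustrative recap under the values visible in this run (helper name as in the sketch above, not the real script):
# Illustrative recap of the accounting checked below; the values are the ones
# echoed in this trace (anon=0, surp=0, resv=0, HugePages_Total=1024).
nr_hugepages=1024
anon=$(get_meminfo_sketch AnonHugePages)     # 0
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024
# The global pool must equal the requested pages plus surplus and reserved,
# and each node's share is then checked the same way (node0 expects 1024 here).
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) &&
    echo "node0=$total expecting $nr_hugepages"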
00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.519 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.519 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.520 07:10:58 -- setup/common.sh@33 -- # echo 0 00:04:56.520 07:10:58 -- setup/common.sh@33 -- # return 0 00:04:56.520 07:10:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.520 nr_hugepages=1024 00:04:56.520 07:10:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.520 resv_hugepages=0 00:04:56.520 07:10:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.520 surplus_hugepages=0 00:04:56.520 07:10:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.520 anon_hugepages=0 00:04:56.520 07:10:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.520 07:10:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.520 07:10:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.520 07:10:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.520 07:10:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.520 07:10:58 -- setup/common.sh@18 -- # local node= 00:04:56.520 07:10:58 -- setup/common.sh@19 -- # local var val 00:04:56.520 07:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.520 07:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.520 07:10:58 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:56.520 07:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.520 07:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.520 07:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6494676 kB' 'MemAvailable: 9417860 kB' 'Buffers: 3696 kB' 'Cached: 3123016 kB' 'SwapCached: 0 kB' 'Active: 495036 kB' 'Inactive: 2747040 kB' 'Active(anon): 125852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116936 kB' 'Mapped: 50116 kB' 'Shmem: 10488 kB' 'KReclaimable: 88220 kB' 'Slab: 190592 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102372 kB' 'KernelStack: 6584 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 
-- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.520 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.520 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 
07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.521 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.521 07:10:58 -- setup/common.sh@33 -- # echo 1024 00:04:56.521 07:10:58 -- setup/common.sh@33 -- # return 0 00:04:56.521 07:10:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.521 07:10:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.521 07:10:58 -- setup/hugepages.sh@27 -- # local node 00:04:56.521 07:10:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.521 07:10:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.521 07:10:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.521 07:10:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.521 07:10:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.521 07:10:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.521 07:10:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.521 07:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.521 07:10:58 -- setup/common.sh@18 -- # local node=0 00:04:56.521 07:10:58 -- setup/common.sh@19 -- # local var val 00:04:56.521 07:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.521 07:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.521 07:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.521 07:10:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.521 07:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.521 07:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.521 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6494676 kB' 'MemUsed: 5744440 kB' 'SwapCached: 0 kB' 'Active: 494984 kB' 'Inactive: 2747040 kB' 'Active(anon): 125800 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2747040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3126712 kB' 'Mapped: 50116 kB' 'AnonPages: 116624 kB' 'Shmem: 10488 kB' 'KernelStack: 6568 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88220 kB' 'Slab: 190592 kB' 'SReclaimable: 88220 kB' 'SUnreclaim: 102372 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # continue 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.522 07:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.522 07:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.522 07:10:58 -- setup/common.sh@33 -- # echo 0 00:04:56.522 07:10:58 -- setup/common.sh@33 -- # return 0 00:04:56.522 07:10:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.522 07:10:58 -- 
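The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above is the xtrace of a plain key/value scan over /proc/meminfo (or the per-node copy under /sys/devices/system/node) that stops at the requested field and echoes its value. A minimal standalone sketch of that pattern, with an illustrative function name and simplified prefix handling rather than the repo's actual mapfile/extglob code:

    get_meminfo_value() {
        # Usage: get_meminfo_value <key> [node], e.g. get_meminfo_value HugePages_Surp 0
        local key=$1 node=${2:-} file=/proc/meminfo var val rest
        # Per-node lookups read the sysfs copy when it exists, as the trace does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val rest; do
            if [[ $var == Node ]]; then
                # per-node rows are prefixed with "Node <n>"; skip that pair
                IFS=': ' read -r var val rest <<<"$rest"
            fi
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done <"$file"
        return 1
    }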
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.522 07:10:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.522 07:10:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.522 07:10:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.522 node0=1024 expecting 1024 00:04:56.522 07:10:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.522 00:04:56.522 real 0m1.157s 00:04:56.522 user 0m0.562s 00:04:56.522 sys 0m0.649s 00:04:56.522 07:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.522 07:10:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.522 ************************************ 00:04:56.522 END TEST no_shrink_alloc 00:04:56.522 ************************************ 00:04:56.522 07:10:58 -- setup/hugepages.sh@217 -- # clear_hp 00:04:56.522 07:10:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:56.522 07:10:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.522 07:10:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.523 07:10:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.523 07:10:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.523 07:10:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.781 07:10:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:56.781 07:10:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:56.781 00:04:56.781 real 0m5.204s 00:04:56.781 user 0m2.378s 00:04:56.781 sys 0m2.796s 00:04:56.781 07:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.781 07:10:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.781 ************************************ 00:04:56.781 END TEST hugepages 00:04:56.781 ************************************ 00:04:56.781 07:10:58 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:56.781 07:10:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.781 07:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.781 07:10:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.781 ************************************ 00:04:56.781 START TEST driver 00:04:56.781 ************************************ 00:04:56.781 07:10:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:56.781 * Looking for test storage... 
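Just above, the hugepages suite finishes by returning every per-node hugepage pool to zero (the clear_hp trace writing 0 into each hugepages-*/nr_hugepages knob) before the driver tests begin. A standalone sketch of that teardown, assuming root privileges and the standard sysfs layout:

    # reset every hugepage size on every NUMA node back to an empty pool
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -e $hp/nr_hugepages ]] && echo 0 > "$hp/nr_hugepages"
        done
    done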
00:04:56.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:56.781 07:10:58 -- setup/driver.sh@68 -- # setup reset 00:04:56.781 07:10:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.781 07:10:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.348 07:10:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:57.348 07:10:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.348 07:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.348 07:10:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.348 ************************************ 00:04:57.348 START TEST guess_driver 00:04:57.348 ************************************ 00:04:57.348 07:10:59 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:57.348 07:10:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:57.348 07:10:59 -- setup/driver.sh@47 -- # local fail=0 00:04:57.348 07:10:59 -- setup/driver.sh@49 -- # pick_driver 00:04:57.348 07:10:59 -- setup/driver.sh@36 -- # vfio 00:04:57.348 07:10:59 -- setup/driver.sh@21 -- # local iommu_grups 00:04:57.348 07:10:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:57.348 07:10:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:57.348 07:10:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:57.348 07:10:59 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:57.348 07:10:59 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:57.348 07:10:59 -- setup/driver.sh@32 -- # return 1 00:04:57.348 07:10:59 -- setup/driver.sh@38 -- # uio 00:04:57.348 07:10:59 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:57.348 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:57.348 07:10:59 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:57.348 Looking for driver=uio_pci_generic 00:04:57.348 07:10:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:57.348 07:10:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.348 07:10:59 -- setup/driver.sh@45 -- # setup output config 00:04:57.348 07:10:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.348 07:10:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.285 07:10:59 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:58.285 07:10:59 -- setup/driver.sh@58 -- # continue 00:04:58.285 07:10:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.285 07:10:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.285 07:10:59 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:58.285 07:10:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.285 07:10:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.285 07:10:59 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:58.285 07:10:59 -- 
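The guess_driver trace above picks a PCI driver in two steps: vfio is chosen only if IOMMU groups are present (or unsafe no-IOMMU mode is enabled), otherwise it falls back to uio_pci_generic after confirming that modprobe can resolve the module. A condensed sketch of that decision, illustrative rather than the script's exact code; in this run both vfio conditions fail, which is why the log settles on uio_pci_generic:

    pick_setup_driver() {
        local unsafe='' groups=()
        shopt -s nullglob
        groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }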
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.285 07:11:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:58.285 07:11:00 -- setup/driver.sh@65 -- # setup reset 00:04:58.285 07:11:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.285 07:11:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.851 00:04:58.851 real 0m1.559s 00:04:58.851 user 0m0.597s 00:04:58.851 sys 0m0.966s 00:04:58.851 07:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.851 ************************************ 00:04:58.851 END TEST guess_driver 00:04:58.851 07:11:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 ************************************ 00:04:59.109 00:04:59.109 real 0m2.279s 00:04:59.109 user 0m0.853s 00:04:59.109 sys 0m1.504s 00:04:59.109 07:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.109 ************************************ 00:04:59.109 END TEST driver 00:04:59.109 07:11:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.109 ************************************ 00:04:59.109 07:11:00 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.109 07:11:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.109 07:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.109 07:11:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.109 ************************************ 00:04:59.109 START TEST devices 00:04:59.109 ************************************ 00:04:59.109 07:11:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.109 * Looking for test storage... 00:04:59.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.109 07:11:00 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:59.109 07:11:00 -- setup/devices.sh@192 -- # setup reset 00:04:59.109 07:11:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.109 07:11:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.044 07:11:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.044 07:11:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:00.044 07:11:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:00.044 07:11:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:00.044 07:11:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.044 07:11:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:00.044 07:11:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:00.044 07:11:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.044 07:11:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.044 07:11:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.045 07:11:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:00.045 07:11:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:00.045 07:11:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.045 07:11:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.045 07:11:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.045 07:11:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:00.045 07:11:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:00.045 07:11:01 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.045 07:11:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.045 07:11:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.045 07:11:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:00.045 07:11:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:00.045 07:11:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.045 07:11:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.045 07:11:01 -- setup/devices.sh@196 -- # blocks=() 00:05:00.045 07:11:01 -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.045 07:11:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.045 07:11:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.045 07:11:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.045 07:11:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.045 07:11:01 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:00.045 07:11:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.045 07:11:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:00.045 07:11:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:00.045 No valid GPT data, bailing 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # pt= 00:05:00.045 07:11:01 -- scripts/common.sh@394 -- # return 1 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:00.045 07:11:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:00.045 07:11:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:00.045 07:11:01 -- setup/common.sh@80 -- # echo 5368709120 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:00.045 07:11:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.045 07:11:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:00.045 07:11:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:00.045 07:11:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:00.045 07:11:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:00.045 07:11:01 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:00.045 07:11:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:00.045 No valid GPT data, bailing 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # pt= 00:05:00.045 07:11:01 -- scripts/common.sh@394 -- # return 1 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:00.045 07:11:01 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:00.045 07:11:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:00.045 07:11:01 -- setup/common.sh@80 -- # echo 4294967296 00:05:00.045 07:11:01 -- 
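Earlier in this pass, get_zoned_devs walked every nvme block device and excluded any zoned namespace; the check is simply whether the queue/zoned attribute reports something other than "none" (nothing is zoned on this VM, so all four namespaces stay in play). A minimal sketch of that filter:

    is_block_zoned() {
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(</sys/block/$dev/queue/zoned) != none ]]
    }

    shopt -s nullglob
    zoned=()
    for path in /sys/block/nvme*; do
        is_block_zoned "${path##*/}" && zoned+=("${path##*/}")
    done
    echo "zoned devices: ${zoned[*]:-none}"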
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:00.045 07:11:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.045 07:11:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:00.045 07:11:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:00.045 07:11:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:00.045 07:11:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:00.045 07:11:01 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:00.045 07:11:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:00.045 No valid GPT data, bailing 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:00.045 07:11:01 -- scripts/common.sh@393 -- # pt= 00:05:00.045 07:11:01 -- scripts/common.sh@394 -- # return 1 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:00.045 07:11:01 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:00.045 07:11:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:00.045 07:11:01 -- setup/common.sh@80 -- # echo 4294967296 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:00.045 07:11:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.045 07:11:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:00.045 07:11:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:00.045 07:11:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:00.045 07:11:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:00.045 07:11:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:00.045 07:11:01 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:00.045 07:11:01 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:00.045 07:11:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:00.304 No valid GPT data, bailing 00:05:00.304 07:11:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:00.304 07:11:01 -- scripts/common.sh@393 -- # pt= 00:05:00.304 07:11:01 -- scripts/common.sh@394 -- # return 1 00:05:00.304 07:11:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:00.304 07:11:01 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:00.304 07:11:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:00.304 07:11:01 -- setup/common.sh@80 -- # echo 4294967296 00:05:00.304 07:11:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:00.304 07:11:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.304 07:11:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:00.304 07:11:01 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:00.304 07:11:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:00.304 07:11:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:00.304 07:11:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.304 07:11:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.304 07:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.304 
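The loop that just finished decides which namespaces the mount tests may use: a device qualifies only if it carries no recognizable partition table and is at least min_disk_size (3 GiB) large. In the sketch below, blkid and the sysfs sector count stand in for the repo's spdk-gpt.py helper, so treat it as an approximation of the check rather than the script itself:

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

    device_is_candidate() {
        local dev=$1 pt size
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -z $pt ]] || return 1                          # partition table found -> in use
        size=$(( $(cat "/sys/block/$dev/size") * 512 ))   # sysfs size is in 512-byte sectors
        (( size >= min_disk_size ))
    }

    device_is_candidate nvme0n1 && echo "nvme0n1 is usable"   # true in this run (5 GiB, no GPT)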
************************************ 00:05:00.304 START TEST nvme_mount 00:05:00.304 ************************************ 00:05:00.304 07:11:01 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:00.304 07:11:01 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:00.304 07:11:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:00.304 07:11:01 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.304 07:11:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.304 07:11:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:00.304 07:11:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.304 07:11:01 -- setup/common.sh@40 -- # local part_no=1 00:05:00.304 07:11:01 -- setup/common.sh@41 -- # local size=1073741824 00:05:00.304 07:11:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.304 07:11:01 -- setup/common.sh@44 -- # parts=() 00:05:00.304 07:11:01 -- setup/common.sh@44 -- # local parts 00:05:00.304 07:11:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.304 07:11:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.304 07:11:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.304 07:11:01 -- setup/common.sh@46 -- # (( part++ )) 00:05:00.304 07:11:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.304 07:11:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:00.304 07:11:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.304 07:11:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:01.266 Creating new GPT entries in memory. 00:05:01.266 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.266 other utilities. 00:05:01.266 07:11:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.266 07:11:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.266 07:11:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.266 07:11:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.266 07:11:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:02.202 Creating new GPT entries in memory. 00:05:02.202 The operation has completed successfully. 
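The partitioning that just completed follows a fixed recipe: zap any existing GPT, create partition 1 over sectors 2048-264191 while holding an flock on the whole disk so nothing races the table rewrite, then wait for the new partition node to appear. In this sketch udevadm settle is an assumed stand-in for the repo's sync_dev_uevents.sh helper, which waits for the partition uevent instead:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # same sector range as the trace
    udevadm settle                                     # assumed wait; the script uses its own helper
    [[ -b ${disk}p1 ]] && echo "${disk}p1 ready for mkfs.ext4"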
00:05:02.202 07:11:04 -- setup/common.sh@57 -- # (( part++ )) 00:05:02.202 07:11:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.202 07:11:04 -- setup/common.sh@62 -- # wait 65911 00:05:02.461 07:11:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.461 07:11:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:02.461 07:11:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.461 07:11:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:02.461 07:11:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:02.461 07:11:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.461 07:11:04 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.461 07:11:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:02.461 07:11:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:02.461 07:11:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.461 07:11:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.461 07:11:04 -- setup/devices.sh@53 -- # local found=0 00:05:02.461 07:11:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.461 07:11:04 -- setup/devices.sh@56 -- # : 00:05:02.461 07:11:04 -- setup/devices.sh@59 -- # local pci status 00:05:02.461 07:11:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:02.461 07:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.461 07:11:04 -- setup/devices.sh@47 -- # setup output config 00:05:02.461 07:11:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.461 07:11:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.461 07:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.461 07:11:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:02.461 07:11:04 -- setup/devices.sh@63 -- # found=1 00:05:02.461 07:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.461 07:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.461 07:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.027 07:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.027 07:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.027 07:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.027 07:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.027 07:11:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.028 07:11:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:03.028 07:11:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.028 07:11:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.028 07:11:04 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.028 07:11:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:03.028 07:11:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.028 07:11:04 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.028 07:11:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.028 07:11:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:03.028 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.028 07:11:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.028 07:11:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.286 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:03.286 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:03.286 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:03.286 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:03.286 07:11:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:03.286 07:11:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:03.286 07:11:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.286 07:11:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:03.286 07:11:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:03.545 07:11:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.545 07:11:05 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.545 07:11:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:03.545 07:11:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:03.545 07:11:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.545 07:11:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.545 07:11:05 -- setup/devices.sh@53 -- # local found=0 00:05:03.545 07:11:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.545 07:11:05 -- setup/devices.sh@56 -- # : 00:05:03.545 07:11:05 -- setup/devices.sh@59 -- # local pci status 00:05:03.545 07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.545 07:11:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.545 07:11:05 -- setup/devices.sh@47 -- # setup output config 00:05:03.545 07:11:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.545 07:11:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.545 07:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.545 07:11:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:03.545 07:11:05 -- setup/devices.sh@63 -- # found=1 00:05:03.545 07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.545 07:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.545 
07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.112 07:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.112 07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.112 07:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.112 07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.112 07:11:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.112 07:11:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:04.112 07:11:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.112 07:11:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.112 07:11:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.112 07:11:05 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.112 07:11:05 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:04.112 07:11:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:04.112 07:11:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:04.112 07:11:05 -- setup/devices.sh@50 -- # local mount_point= 00:05:04.112 07:11:05 -- setup/devices.sh@51 -- # local test_file= 00:05:04.112 07:11:05 -- setup/devices.sh@53 -- # local found=0 00:05:04.112 07:11:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.112 07:11:05 -- setup/devices.sh@59 -- # local pci status 00:05:04.112 07:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.112 07:11:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.112 07:11:05 -- setup/devices.sh@47 -- # setup output config 00:05:04.112 07:11:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.112 07:11:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.370 07:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.370 07:11:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:04.370 07:11:06 -- setup/devices.sh@63 -- # found=1 00:05:04.370 07:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.370 07:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.370 07:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.938 07:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.938 07:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.938 07:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.938 07:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.938 07:11:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.938 07:11:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.938 07:11:06 -- setup/devices.sh@68 -- # return 0 00:05:04.938 07:11:06 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:04.938 07:11:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.938 07:11:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.938 07:11:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.938 07:11:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.938 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:04.938 00:05:04.938 real 0m4.712s 00:05:04.938 user 0m1.080s 00:05:04.938 sys 0m1.299s 00:05:04.938 07:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.938 07:11:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.938 ************************************ 00:05:04.938 END TEST nvme_mount 00:05:04.938 ************************************ 00:05:04.938 07:11:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:04.938 07:11:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.938 07:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.938 07:11:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.938 ************************************ 00:05:04.938 START TEST dm_mount 00:05:04.938 ************************************ 00:05:04.938 07:11:06 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:04.938 07:11:06 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:04.938 07:11:06 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:04.938 07:11:06 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:04.938 07:11:06 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:04.938 07:11:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:04.938 07:11:06 -- setup/common.sh@40 -- # local part_no=2 00:05:04.938 07:11:06 -- setup/common.sh@41 -- # local size=1073741824 00:05:04.938 07:11:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:04.938 07:11:06 -- setup/common.sh@44 -- # parts=() 00:05:04.938 07:11:06 -- setup/common.sh@44 -- # local parts 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.938 07:11:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part++ )) 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.938 07:11:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part++ )) 00:05:04.938 07:11:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.938 07:11:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:04.938 07:11:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:04.938 07:11:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.314 Creating new GPT entries in memory. 00:05:06.314 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.314 other utilities. 00:05:06.314 07:11:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.314 07:11:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.314 07:11:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.314 07:11:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.314 07:11:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:07.249 Creating new GPT entries in memory. 00:05:07.249 The operation has completed successfully. 00:05:07.249 07:11:08 -- setup/common.sh@57 -- # (( part++ )) 00:05:07.249 07:11:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.249 07:11:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:07.249 07:11:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.249 07:11:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:08.185 The operation has completed successfully. 00:05:08.185 07:11:09 -- setup/common.sh@57 -- # (( part++ )) 00:05:08.185 07:11:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.185 07:11:09 -- setup/common.sh@62 -- # wait 66372 00:05:08.185 07:11:09 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:08.185 07:11:09 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.185 07:11:09 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:08.185 07:11:09 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:08.185 07:11:09 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:08.185 07:11:09 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.185 07:11:09 -- setup/devices.sh@161 -- # break 00:05:08.185 07:11:09 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.185 07:11:09 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:08.185 07:11:09 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:08.185 07:11:09 -- setup/devices.sh@166 -- # dm=dm-0 00:05:08.185 07:11:09 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:08.185 07:11:09 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:08.185 07:11:09 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.185 07:11:09 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:08.185 07:11:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.185 07:11:09 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.185 07:11:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:08.185 07:11:09 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.185 07:11:09 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:08.185 07:11:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.185 07:11:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:08.185 07:11:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.185 07:11:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:08.185 07:11:09 -- setup/devices.sh@53 -- # local found=0 00:05:08.185 07:11:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.185 07:11:09 -- setup/devices.sh@56 -- # : 00:05:08.185 07:11:09 -- setup/devices.sh@59 -- # local pci status 00:05:08.185 07:11:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.185 07:11:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.185 07:11:09 -- setup/devices.sh@47 -- # setup output config 00:05:08.185 07:11:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.185 07:11:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.443 07:11:10 -- 
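Once dmsetup create returns, the trace resolves the friendly /dev/mapper name to its dm-N node and confirms that both backing partitions list it as a holder before building a filesystem on it. The table passed to dmsetup is not visible in the log, so this sketch only covers the verification half:

    name=nvme_dm_test
    node=$(readlink -f "/dev/mapper/$name")   # e.g. /dev/dm-0
    dm=${node##*/}                            # -> dm-0
    for part in nvme0n1p1 nvme0n1p2; do
        if [[ -e /sys/class/block/$part/holders/$dm ]]; then
            echo "$part is held by $dm"
        fi
    done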
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.443 07:11:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:08.443 07:11:10 -- setup/devices.sh@63 -- # found=1 00:05:08.443 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.443 07:11:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.443 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.702 07:11:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.702 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.961 07:11:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.961 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.961 07:11:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.961 07:11:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:08.961 07:11:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.961 07:11:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.961 07:11:10 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:08.961 07:11:10 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.961 07:11:10 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:08.961 07:11:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.961 07:11:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:08.961 07:11:10 -- setup/devices.sh@50 -- # local mount_point= 00:05:08.961 07:11:10 -- setup/devices.sh@51 -- # local test_file= 00:05:08.961 07:11:10 -- setup/devices.sh@53 -- # local found=0 00:05:08.961 07:11:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.961 07:11:10 -- setup/devices.sh@59 -- # local pci status 00:05:08.961 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.961 07:11:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.961 07:11:10 -- setup/devices.sh@47 -- # setup output config 00:05:08.961 07:11:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.961 07:11:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.220 07:11:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.220 07:11:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.220 07:11:10 -- setup/devices.sh@63 -- # found=1 00:05:09.220 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.220 07:11:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.220 07:11:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.478 07:11:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.478 07:11:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.478 07:11:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.478 07:11:11 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.738 07:11:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.738 07:11:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.738 07:11:11 -- setup/devices.sh@68 -- # return 0 00:05:09.738 07:11:11 -- setup/devices.sh@187 -- # cleanup_dm 00:05:09.738 07:11:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.738 07:11:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.738 07:11:11 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:09.738 07:11:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.738 07:11:11 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:09.738 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.738 07:11:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.738 07:11:11 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:09.738 00:05:09.738 real 0m4.715s 00:05:09.738 user 0m0.718s 00:05:09.738 sys 0m0.904s 00:05:09.738 07:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.738 07:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.738 ************************************ 00:05:09.738 END TEST dm_mount 00:05:09.738 ************************************ 00:05:09.738 07:11:11 -- setup/devices.sh@1 -- # cleanup 00:05:09.738 07:11:11 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:09.738 07:11:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.738 07:11:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.738 07:11:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.738 07:11:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.738 07:11:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.997 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.997 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.997 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.997 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.997 07:11:11 -- setup/devices.sh@12 -- # cleanup_dm 00:05:09.997 07:11:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.997 07:11:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.997 07:11:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.997 07:11:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.997 07:11:11 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.997 07:11:11 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:09.997 00:05:09.997 real 0m11.023s 00:05:09.997 user 0m2.473s 00:05:09.997 sys 0m2.830s 00:05:09.997 07:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.997 ************************************ 00:05:09.997 07:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.997 END TEST devices 00:05:09.997 ************************************ 00:05:09.997 00:05:09.997 real 0m23.365s 00:05:09.997 user 0m7.766s 00:05:09.997 sys 0m9.900s 00:05:09.997 07:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.997 07:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.997 ************************************ 00:05:09.997 END TEST setup.sh 00:05:09.997 ************************************ 00:05:10.256 07:11:11 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.256 Hugepages 00:05:10.256 node hugesize free / total 00:05:10.256 node0 1048576kB 0 / 0 00:05:10.256 node0 2048kB 2048 / 2048 00:05:10.256 00:05:10.256 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.256 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:10.514 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:10.514 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:10.514 07:11:12 -- spdk/autotest.sh@141 -- # uname -s 00:05:10.514 07:11:12 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:10.514 07:11:12 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:10.514 07:11:12 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.450 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.450 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.450 07:11:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:12.386 07:11:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:12.386 07:11:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:12.386 07:11:14 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.386 07:11:14 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:12.386 07:11:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.386 07:11:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.386 07:11:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.386 07:11:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.386 07:11:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.386 07:11:14 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:12.386 07:11:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:12.386 07:11:14 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.954 Waiting for block devices as requested 00:05:12.954 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.954 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.212 07:11:14 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:13.213 07:11:14 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:13.213 07:11:14 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:13.213 07:11:14 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:13.213 07:11:14 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:13.213 07:11:14 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1542 -- # continue 00:05:13.213 07:11:14 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:13.213 07:11:14 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:13.213 07:11:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:13.213 07:11:14 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:13.213 07:11:14 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:13.213 07:11:14 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:13.213 07:11:14 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:13.213 07:11:14 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:13.213 07:11:14 -- common/autotest_common.sh@1542 -- # continue 00:05:13.213 07:11:14 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:13.213 07:11:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.213 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.213 07:11:14 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:13.213 07:11:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:13.213 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.213 07:11:14 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.038 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.038 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:14.039 07:11:15 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:14.039 07:11:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:14.039 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 07:11:15 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:14.298 07:11:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:14.298 07:11:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:14.298 07:11:15 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:14.298 07:11:15 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:14.298 07:11:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:14.298 07:11:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.298 07:11:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.298 07:11:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.298 07:11:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.298 07:11:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.298 07:11:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:14.298 07:11:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:14.298 07:11:15 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:14.298 07:11:15 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:14.298 07:11:15 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:14.298 07:11:15 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:14.298 07:11:15 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:14.298 07:11:15 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:14.298 07:11:15 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:14.298 07:11:15 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:14.298 07:11:15 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:14.298 07:11:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:14.298 07:11:15 -- common/autotest_common.sh@1578 -- # return 0 00:05:14.298 07:11:15 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:14.298 07:11:15 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:14.298 07:11:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:14.298 07:11:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:14.298 07:11:15 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:14.298 07:11:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:14.298 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 07:11:15 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.298 07:11:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.298 07:11:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.298 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 ************************************ 00:05:14.298 START TEST env 00:05:14.298 ************************************ 00:05:14.298 07:11:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.298 * Looking for test storage... 
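Note: the id-ctrl parsing traced above reduces to reading the controller's OACS word and testing its namespace-management bit (0x12a & 0x8 = 0x8) before checking unvmcap, and the BDF list itself comes from gen_nvme.sh piped through jq. A minimal stand-alone sketch of the same checks, using the two emulated controllers seen in this run:

    # enumerate NVMe PCI addresses the same way the harness does
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'   # 0000:00:06.0, 0000:00:07.0
    for ctrl in /dev/nvme0 /dev/nvme1; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)      # " 0x12a" on these drives
        (( (oacs & 0x8) )) && echo "$ctrl: namespace management supported"
        nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2           # " 0" -> nothing to revert
    done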
00:05:14.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:14.298 07:11:16 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:14.298 07:11:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.298 07:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.298 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 ************************************ 00:05:14.298 START TEST env_memory 00:05:14.298 ************************************ 00:05:14.298 07:11:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:14.298 00:05:14.298 00:05:14.298 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.298 http://cunit.sourceforge.net/ 00:05:14.298 00:05:14.298 00:05:14.298 Suite: memory 00:05:14.298 Test: alloc and free memory map ...[2024-11-04 07:11:16.130778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.557 passed 00:05:14.557 Test: mem map translation ...[2024-11-04 07:11:16.161832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.557 [2024-11-04 07:11:16.161882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.557 [2024-11-04 07:11:16.161938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.557 [2024-11-04 07:11:16.161949] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.557 passed 00:05:14.557 Test: mem map registration ...[2024-11-04 07:11:16.225639] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:14.557 [2024-11-04 07:11:16.225675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:14.557 passed 00:05:14.557 Test: mem map adjacent registrations ...passed 00:05:14.557 00:05:14.557 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.557 suites 1 1 n/a 0 0 00:05:14.557 tests 4 4 4 0 0 00:05:14.557 asserts 152 152 152 0 n/a 00:05:14.557 00:05:14.557 Elapsed time = 0.213 seconds 00:05:14.557 00:05:14.557 real 0m0.232s 00:05:14.557 user 0m0.213s 00:05:14.557 sys 0m0.016s 00:05:14.557 07:11:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.557 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.557 ************************************ 00:05:14.557 END TEST env_memory 00:05:14.557 ************************************ 00:05:14.557 07:11:16 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.557 07:11:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.557 07:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.557 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.557 ************************************ 00:05:14.557 START TEST env_vtophys 00:05:14.557 ************************************ 00:05:14.557 07:11:16 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.557 EAL: lib.eal log level changed from notice to debug 00:05:14.557 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.557 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.557 EAL: Maximum logical cores by configuration: 128 00:05:14.557 EAL: Detected CPU lcores: 10 00:05:14.557 EAL: Detected NUMA nodes: 1 00:05:14.557 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:14.557 EAL: Detected shared linkage of DPDK 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:14.557 EAL: Registered [vdev] bus. 00:05:14.557 EAL: bus.vdev log level changed from disabled to notice 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:14.557 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:14.557 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:14.557 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:14.557 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.557 EAL: No shared files mode enabled, IPC is disabled 00:05:14.557 EAL: Selected IOVA mode 'PA' 00:05:14.557 EAL: Probing VFIO support... 00:05:14.557 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.557 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.557 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.557 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.816 EAL: Setting up physically contiguous memory... 
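The VFIO probe above fails because no vfio modules are loaded in this VM, which is also why setup.sh bound both NVMe controllers to uio_pci_generic earlier in the log. A quick manual check of the same preconditions, a sketch using the paths EAL probes and the 2 MB hugepage pool shown in the status output:

    # is vfio usable, or will EAL/setup.sh fall back to uio_pci_generic?
    [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]] \
        && echo "vfio loaded" \
        || echo "vfio not loaded -> EAL skips VFIO support"
    # hugepage pool backing the EAL memseg lists (2048 kB pages on this node)
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages \
        /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages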
00:05:14.817 EAL: Setting maximum number of open files to 524288 00:05:14.817 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.817 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.817 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.817 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.817 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.817 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.817 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.817 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.817 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.817 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.817 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.817 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.817 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.817 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.817 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.817 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.817 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.817 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.817 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.817 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.817 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.817 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.817 EAL: Hugepages will be freed exactly as allocated. 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: TSC frequency is ~2200000 KHz 00:05:14.817 EAL: Main lcore 0 is ready (tid=7f7ee0812a00;cpuset=[0]) 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 0 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.817 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.817 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.817 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.817 00:05:14.817 00:05:14.817 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.817 http://cunit.sourceforge.net/ 00:05:14.817 00:05:14.817 00:05:14.817 Suite: components_suite 00:05:14.817 Test: vtophys_malloc_test ...passed 00:05:14.817 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
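Each "Heap on socket 0 was expanded by N MB" message that follows corresponds to EAL backing more of the reserved memseg virtual areas with 2 MB hugepages. One way to watch that consumption from a second shell while the test runs, a sketch assuming the 2048-page pool reported earlier:

    watch -n1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'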
00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.817 EAL: Trying to obtain current memory policy. 
00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.817 EAL: Trying to obtain current memory policy. 00:05:14.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.817 EAL: Restoring previous memory policy: 4 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.817 EAL: request: mp_malloc_sync 00:05:14.817 EAL: No shared files mode enabled, IPC is disabled 00:05:14.817 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.076 EAL: request: mp_malloc_sync 00:05:15.076 EAL: No shared files mode enabled, IPC is disabled 00:05:15.076 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.076 EAL: Trying to obtain current memory policy. 00:05:15.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.076 EAL: Restoring previous memory policy: 4 00:05:15.076 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.076 EAL: request: mp_malloc_sync 00:05:15.076 EAL: No shared files mode enabled, IPC is disabled 00:05:15.076 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.076 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.076 EAL: request: mp_malloc_sync 00:05:15.076 EAL: No shared files mode enabled, IPC is disabled 00:05:15.076 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.076 EAL: Trying to obtain current memory policy. 00:05:15.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.335 EAL: Restoring previous memory policy: 4 00:05:15.335 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.335 EAL: request: mp_malloc_sync 00:05:15.335 EAL: No shared files mode enabled, IPC is disabled 00:05:15.335 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.335 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.335 EAL: request: mp_malloc_sync 00:05:15.335 EAL: No shared files mode enabled, IPC is disabled 00:05:15.335 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.335 EAL: Trying to obtain current memory policy. 
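The request sizes in this suite follow a simple pattern: each round doubles, and the reported heap growth matches 2^n + 2 MB for n = 1..10 (presumably the doubled request plus the 2 MB already mapped at startup), giving 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB. The sequence can be reproduced with a one-liner:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB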
00:05:15.335 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.594 EAL: Restoring previous memory policy: 4 00:05:15.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.594 EAL: request: mp_malloc_sync 00:05:15.594 EAL: No shared files mode enabled, IPC is disabled 00:05:15.594 EAL: Heap on socket 0 was expanded by 1026MB 00:05:15.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.112 passed 00:05:16.112 00:05:16.112 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.112 suites 1 1 n/a 0 0 00:05:16.112 tests 2 2 2 0 0 00:05:16.112 asserts 5463 5463 5463 0 n/a 00:05:16.112 00:05:16.112 Elapsed time = 1.235 seconds 00:05:16.112 EAL: request: mp_malloc_sync 00:05:16.112 EAL: No shared files mode enabled, IPC is disabled 00:05:16.112 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:16.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.112 EAL: request: mp_malloc_sync 00:05:16.112 EAL: No shared files mode enabled, IPC is disabled 00:05:16.112 EAL: Heap on socket 0 was shrunk by 2MB 00:05:16.112 EAL: No shared files mode enabled, IPC is disabled 00:05:16.112 EAL: No shared files mode enabled, IPC is disabled 00:05:16.112 EAL: No shared files mode enabled, IPC is disabled 00:05:16.112 ************************************ 00:05:16.112 END TEST env_vtophys 00:05:16.112 00:05:16.112 real 0m1.429s 00:05:16.112 user 0m0.787s 00:05:16.112 sys 0m0.512s 00:05:16.112 07:11:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.112 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.112 ************************************ 00:05:16.112 07:11:17 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.112 07:11:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.112 07:11:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.112 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.112 ************************************ 00:05:16.112 START TEST env_pci 00:05:16.112 ************************************ 00:05:16.112 07:11:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.112 00:05:16.112 00:05:16.112 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.112 http://cunit.sourceforge.net/ 00:05:16.112 00:05:16.112 00:05:16.112 Suite: pci 00:05:16.112 Test: pci_hook ...[2024-11-04 07:11:17.859656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67509 has claimed it 00:05:16.112 passed 00:05:16.112 00:05:16.112 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.112 suites 1 1 n/a 0 0 00:05:16.112 tests 1 1 1 0 0 00:05:16.112 asserts 25 25 25 0 n/a 00:05:16.112 00:05:16.112 Elapsed time = 0.002 seconds 00:05:16.112 EAL: Cannot find device (10000:00:01.0) 00:05:16.112 EAL: Failed to attach device on primary process 00:05:16.112 00:05:16.112 real 0m0.021s 00:05:16.112 user 0m0.012s 00:05:16.112 sys 0m0.008s 00:05:16.112 07:11:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.112 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.112 ************************************ 00:05:16.112 END TEST env_pci 00:05:16.112 ************************************ 00:05:16.112 07:11:17 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:16.112 07:11:17 -- env/env.sh@15 -- # uname 00:05:16.112 07:11:17 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:16.112 07:11:17 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:16.112 07:11:17 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.112 07:11:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:16.112 07:11:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.112 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.112 ************************************ 00:05:16.112 START TEST env_dpdk_post_init 00:05:16.112 ************************************ 00:05:16.112 07:11:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.371 EAL: Detected CPU lcores: 10 00:05:16.371 EAL: Detected NUMA nodes: 1 00:05:16.371 EAL: Detected shared linkage of DPDK 00:05:16.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.371 EAL: Selected IOVA mode 'PA' 00:05:16.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:16.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:16.371 Starting DPDK initialization... 00:05:16.371 Starting SPDK post initialization... 00:05:16.371 SPDK NVMe probe 00:05:16.371 Attaching to 0000:00:06.0 00:05:16.371 Attaching to 0000:00:07.0 00:05:16.371 Attached to 0000:00:06.0 00:05:16.371 Attached to 0000:00:07.0 00:05:16.371 Cleaning up... 00:05:16.371 00:05:16.371 real 0m0.181s 00:05:16.371 user 0m0.047s 00:05:16.371 sys 0m0.035s 00:05:16.371 07:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.371 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.371 ************************************ 00:05:16.371 END TEST env_dpdk_post_init 00:05:16.371 ************************************ 00:05:16.371 07:11:18 -- env/env.sh@26 -- # uname 00:05:16.371 07:11:18 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.371 07:11:18 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.371 07:11:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.371 07:11:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.371 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.371 ************************************ 00:05:16.371 START TEST env_mem_callbacks 00:05:16.371 ************************************ 00:05:16.371 07:11:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.371 EAL: Detected CPU lcores: 10 00:05:16.371 EAL: Detected NUMA nodes: 1 00:05:16.371 EAL: Detected shared linkage of DPDK 00:05:16.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.371 EAL: Selected IOVA mode 'PA' 00:05:16.630 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.630 00:05:16.630 00:05:16.630 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.630 http://cunit.sourceforge.net/ 00:05:16.630 00:05:16.630 00:05:16.630 Suite: memory 00:05:16.630 Test: test ... 
00:05:16.630 register 0x200000200000 2097152 00:05:16.630 malloc 3145728 00:05:16.630 register 0x200000400000 4194304 00:05:16.630 buf 0x200000500000 len 3145728 PASSED 00:05:16.630 malloc 64 00:05:16.630 buf 0x2000004fff40 len 64 PASSED 00:05:16.630 malloc 4194304 00:05:16.630 register 0x200000800000 6291456 00:05:16.630 buf 0x200000a00000 len 4194304 PASSED 00:05:16.630 free 0x200000500000 3145728 00:05:16.630 free 0x2000004fff40 64 00:05:16.630 unregister 0x200000400000 4194304 PASSED 00:05:16.630 free 0x200000a00000 4194304 00:05:16.630 unregister 0x200000800000 6291456 PASSED 00:05:16.630 malloc 8388608 00:05:16.630 register 0x200000400000 10485760 00:05:16.630 buf 0x200000600000 len 8388608 PASSED 00:05:16.630 free 0x200000600000 8388608 00:05:16.630 unregister 0x200000400000 10485760 PASSED 00:05:16.630 passed 00:05:16.630 00:05:16.630 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.630 suites 1 1 n/a 0 0 00:05:16.630 tests 1 1 1 0 0 00:05:16.630 asserts 15 15 15 0 n/a 00:05:16.630 00:05:16.630 Elapsed time = 0.009 seconds 00:05:16.630 00:05:16.630 real 0m0.148s 00:05:16.630 user 0m0.015s 00:05:16.630 sys 0m0.031s 00:05:16.630 07:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.630 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.630 ************************************ 00:05:16.630 END TEST env_mem_callbacks 00:05:16.630 ************************************ 00:05:16.630 00:05:16.630 real 0m2.388s 00:05:16.630 user 0m1.189s 00:05:16.630 sys 0m0.843s 00:05:16.630 07:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.630 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.630 ************************************ 00:05:16.630 END TEST env 00:05:16.630 ************************************ 00:05:16.630 07:11:18 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.630 07:11:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.630 07:11:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.630 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.630 ************************************ 00:05:16.630 START TEST rpc 00:05:16.630 ************************************ 00:05:16.630 07:11:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.889 * Looking for test storage... 00:05:16.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.889 07:11:18 -- rpc/rpc.sh@65 -- # spdk_pid=67617 00:05:16.889 07:11:18 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:16.889 07:11:18 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.889 07:11:18 -- rpc/rpc.sh@67 -- # waitforlisten 67617 00:05:16.889 07:11:18 -- common/autotest_common.sh@819 -- # '[' -z 67617 ']' 00:05:16.889 07:11:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.889 07:11:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.889 07:11:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
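The rpc suite that starts here drives a freshly launched target entirely over its UNIX-domain socket; the rpc_cmd helper issues the same JSON-RPC calls that scripts/rpc.py would send to /var/tmp/spdk.sock. A rough manual equivalent of the start-and-wait step (the polling loop is only a sketch, not the harness's actual waitforlisten implementation):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # poll the default RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $tgt_pid) is up on /var/tmp/spdk.sock"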
00:05:16.889 07:11:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.889 07:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.889 [2024-11-04 07:11:18.579952] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:16.889 [2024-11-04 07:11:18.580296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67617 ] 00:05:16.889 [2024-11-04 07:11:18.720848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.148 [2024-11-04 07:11:18.781034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.148 [2024-11-04 07:11:18.781187] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.148 [2024-11-04 07:11:18.781201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67617' to capture a snapshot of events at runtime. 00:05:17.148 [2024-11-04 07:11:18.781209] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67617 for offline analysis/debug. 00:05:17.148 [2024-11-04 07:11:18.781239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.084 07:11:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.084 07:11:19 -- common/autotest_common.sh@852 -- # return 0 00:05:18.084 07:11:19 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.084 07:11:19 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.084 07:11:19 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.084 07:11:19 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.084 07:11:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.084 07:11:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.084 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.084 ************************************ 00:05:18.084 START TEST rpc_integrity 00:05:18.084 ************************************ 00:05:18.084 07:11:19 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:18.084 07:11:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.084 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.084 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.084 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.084 07:11:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.084 07:11:19 -- rpc/rpc.sh@13 -- # jq length 00:05:18.084 07:11:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.084 07:11:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.084 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.084 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.084 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.084 07:11:19 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.084 07:11:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.084 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.084 07:11:19 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.084 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.084 07:11:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.084 { 00:05:18.084 "aliases": [ 00:05:18.084 "2de60068-d01f-48e3-93d6-dc5906960ec3" 00:05:18.084 ], 00:05:18.084 "assigned_rate_limits": { 00:05:18.084 "r_mbytes_per_sec": 0, 00:05:18.084 "rw_ios_per_sec": 0, 00:05:18.084 "rw_mbytes_per_sec": 0, 00:05:18.084 "w_mbytes_per_sec": 0 00:05:18.084 }, 00:05:18.084 "block_size": 512, 00:05:18.084 "claimed": false, 00:05:18.084 "driver_specific": {}, 00:05:18.084 "memory_domains": [ 00:05:18.084 { 00:05:18.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.084 "dma_device_type": 2 00:05:18.084 } 00:05:18.085 ], 00:05:18.085 "name": "Malloc0", 00:05:18.085 "num_blocks": 16384, 00:05:18.085 "product_name": "Malloc disk", 00:05:18.085 "supported_io_types": { 00:05:18.085 "abort": true, 00:05:18.085 "compare": false, 00:05:18.085 "compare_and_write": false, 00:05:18.085 "flush": true, 00:05:18.085 "nvme_admin": false, 00:05:18.085 "nvme_io": false, 00:05:18.085 "read": true, 00:05:18.085 "reset": true, 00:05:18.085 "unmap": true, 00:05:18.085 "write": true, 00:05:18.085 "write_zeroes": true 00:05:18.085 }, 00:05:18.085 "uuid": "2de60068-d01f-48e3-93d6-dc5906960ec3", 00:05:18.085 "zoned": false 00:05:18.085 } 00:05:18.085 ]' 00:05:18.085 07:11:19 -- rpc/rpc.sh@17 -- # jq length 00:05:18.085 07:11:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.085 07:11:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.085 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.085 [2024-11-04 07:11:19.729826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.085 [2024-11-04 07:11:19.729871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.085 [2024-11-04 07:11:19.729913] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14d2b60 00:05:18.085 [2024-11-04 07:11:19.729922] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.085 [2024-11-04 07:11:19.731343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.085 [2024-11-04 07:11:19.731373] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.085 Passthru0 00:05:18.085 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.085 07:11:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.085 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.085 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.085 07:11:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.085 { 00:05:18.085 "aliases": [ 00:05:18.085 "2de60068-d01f-48e3-93d6-dc5906960ec3" 00:05:18.085 ], 00:05:18.085 "assigned_rate_limits": { 00:05:18.085 "r_mbytes_per_sec": 0, 00:05:18.085 "rw_ios_per_sec": 0, 00:05:18.085 "rw_mbytes_per_sec": 0, 00:05:18.085 "w_mbytes_per_sec": 0 00:05:18.085 }, 00:05:18.085 "block_size": 512, 00:05:18.085 "claim_type": "exclusive_write", 00:05:18.085 "claimed": true, 00:05:18.085 "driver_specific": {}, 00:05:18.085 "memory_domains": [ 00:05:18.085 { 00:05:18.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.085 "dma_device_type": 2 00:05:18.085 } 00:05:18.085 ], 00:05:18.085 "name": "Malloc0", 00:05:18.085 "num_blocks": 16384, 
00:05:18.085 "product_name": "Malloc disk", 00:05:18.085 "supported_io_types": { 00:05:18.085 "abort": true, 00:05:18.085 "compare": false, 00:05:18.085 "compare_and_write": false, 00:05:18.085 "flush": true, 00:05:18.085 "nvme_admin": false, 00:05:18.085 "nvme_io": false, 00:05:18.085 "read": true, 00:05:18.085 "reset": true, 00:05:18.085 "unmap": true, 00:05:18.085 "write": true, 00:05:18.085 "write_zeroes": true 00:05:18.085 }, 00:05:18.085 "uuid": "2de60068-d01f-48e3-93d6-dc5906960ec3", 00:05:18.085 "zoned": false 00:05:18.085 }, 00:05:18.085 { 00:05:18.085 "aliases": [ 00:05:18.085 "2ae757db-214d-5d32-aa0b-15b9367d6b6f" 00:05:18.085 ], 00:05:18.085 "assigned_rate_limits": { 00:05:18.085 "r_mbytes_per_sec": 0, 00:05:18.085 "rw_ios_per_sec": 0, 00:05:18.085 "rw_mbytes_per_sec": 0, 00:05:18.085 "w_mbytes_per_sec": 0 00:05:18.085 }, 00:05:18.085 "block_size": 512, 00:05:18.085 "claimed": false, 00:05:18.085 "driver_specific": { 00:05:18.085 "passthru": { 00:05:18.085 "base_bdev_name": "Malloc0", 00:05:18.085 "name": "Passthru0" 00:05:18.085 } 00:05:18.085 }, 00:05:18.085 "memory_domains": [ 00:05:18.085 { 00:05:18.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.085 "dma_device_type": 2 00:05:18.085 } 00:05:18.085 ], 00:05:18.085 "name": "Passthru0", 00:05:18.085 "num_blocks": 16384, 00:05:18.085 "product_name": "passthru", 00:05:18.085 "supported_io_types": { 00:05:18.085 "abort": true, 00:05:18.085 "compare": false, 00:05:18.085 "compare_and_write": false, 00:05:18.085 "flush": true, 00:05:18.085 "nvme_admin": false, 00:05:18.085 "nvme_io": false, 00:05:18.085 "read": true, 00:05:18.085 "reset": true, 00:05:18.085 "unmap": true, 00:05:18.085 "write": true, 00:05:18.085 "write_zeroes": true 00:05:18.085 }, 00:05:18.085 "uuid": "2ae757db-214d-5d32-aa0b-15b9367d6b6f", 00:05:18.085 "zoned": false 00:05:18.085 } 00:05:18.085 ]' 00:05:18.085 07:11:19 -- rpc/rpc.sh@21 -- # jq length 00:05:18.085 07:11:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.085 07:11:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.085 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.085 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.085 07:11:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:18.085 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.085 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.085 07:11:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.085 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.085 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.085 07:11:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.085 07:11:19 -- rpc/rpc.sh@26 -- # jq length 00:05:18.085 ************************************ 00:05:18.085 END TEST rpc_integrity 00:05:18.085 ************************************ 00:05:18.085 07:11:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.085 00:05:18.085 real 0m0.310s 00:05:18.085 user 0m0.203s 00:05:18.085 sys 0m0.036s 00:05:18.085 07:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.085 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.343 07:11:19 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:18.343 07:11:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.343 
07:11:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.343 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.343 ************************************ 00:05:18.343 START TEST rpc_plugins 00:05:18.343 ************************************ 00:05:18.344 07:11:19 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:18.344 07:11:19 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.344 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.344 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.344 07:11:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.344 07:11:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.344 07:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.344 07:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.344 07:11:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:18.344 { 00:05:18.344 "aliases": [ 00:05:18.344 "9f608cc1-a507-480c-9912-8df744559f4d" 00:05:18.344 ], 00:05:18.344 "assigned_rate_limits": { 00:05:18.344 "r_mbytes_per_sec": 0, 00:05:18.344 "rw_ios_per_sec": 0, 00:05:18.344 "rw_mbytes_per_sec": 0, 00:05:18.344 "w_mbytes_per_sec": 0 00:05:18.344 }, 00:05:18.344 "block_size": 4096, 00:05:18.344 "claimed": false, 00:05:18.344 "driver_specific": {}, 00:05:18.344 "memory_domains": [ 00:05:18.344 { 00:05:18.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.344 "dma_device_type": 2 00:05:18.344 } 00:05:18.344 ], 00:05:18.344 "name": "Malloc1", 00:05:18.344 "num_blocks": 256, 00:05:18.344 "product_name": "Malloc disk", 00:05:18.344 "supported_io_types": { 00:05:18.344 "abort": true, 00:05:18.344 "compare": false, 00:05:18.344 "compare_and_write": false, 00:05:18.344 "flush": true, 00:05:18.344 "nvme_admin": false, 00:05:18.344 "nvme_io": false, 00:05:18.344 "read": true, 00:05:18.344 "reset": true, 00:05:18.344 "unmap": true, 00:05:18.344 "write": true, 00:05:18.344 "write_zeroes": true 00:05:18.344 }, 00:05:18.344 "uuid": "9f608cc1-a507-480c-9912-8df744559f4d", 00:05:18.344 "zoned": false 00:05:18.344 } 00:05:18.344 ]' 00:05:18.344 07:11:19 -- rpc/rpc.sh@32 -- # jq length 00:05:18.344 07:11:20 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.344 07:11:20 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.344 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.344 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.344 07:11:20 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.344 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.344 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.344 07:11:20 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.344 07:11:20 -- rpc/rpc.sh@36 -- # jq length 00:05:18.344 ************************************ 00:05:18.344 END TEST rpc_plugins 00:05:18.344 ************************************ 00:05:18.344 07:11:20 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.344 00:05:18.344 real 0m0.162s 00:05:18.344 user 0m0.105s 00:05:18.344 sys 0m0.018s 00:05:18.344 07:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.344 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:20 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
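The rpc_integrity run above exercises the bdev layer purely through RPC: create a malloc bdev, wrap it in a passthru bdev that claims the base, confirm both show up in bdev_get_bdevs, then tear down in reverse order. The same sequence can be issued by hand against the running target (rpc.py path assumed from the repo layout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                       # 8 MiB / 512 B blocks -> 16384 blocks, "Malloc0"
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # passthru claims the base bdev
    $rpc bdev_get_bdevs | jq length                     # 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                     # 0

The jq length checks mirror the 0 -> 1 -> 2 -> 0 bdev counts asserted in the trace.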
00:05:18.344 07:11:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.344 07:11:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.344 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 ************************************ 00:05:18.344 START TEST rpc_trace_cmd_test 00:05:18.344 ************************************ 00:05:18.344 07:11:20 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:18.344 07:11:20 -- rpc/rpc.sh@40 -- # local info 00:05:18.344 07:11:20 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.344 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.344 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.344 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.344 07:11:20 -- rpc/rpc.sh@42 -- # info='{ 00:05:18.344 "bdev": { 00:05:18.344 "mask": "0x8", 00:05:18.344 "tpoint_mask": "0xffffffffffffffff" 00:05:18.344 }, 00:05:18.344 "bdev_nvme": { 00:05:18.344 "mask": "0x4000", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "blobfs": { 00:05:18.344 "mask": "0x80", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "dsa": { 00:05:18.344 "mask": "0x200", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "ftl": { 00:05:18.344 "mask": "0x40", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "iaa": { 00:05:18.344 "mask": "0x1000", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "iscsi_conn": { 00:05:18.344 "mask": "0x2", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "nvme_pcie": { 00:05:18.344 "mask": "0x800", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "nvme_tcp": { 00:05:18.344 "mask": "0x2000", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "nvmf_rdma": { 00:05:18.344 "mask": "0x10", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "nvmf_tcp": { 00:05:18.344 "mask": "0x20", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "scsi": { 00:05:18.344 "mask": "0x4", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "thread": { 00:05:18.344 "mask": "0x400", 00:05:18.344 "tpoint_mask": "0x0" 00:05:18.344 }, 00:05:18.344 "tpoint_group_mask": "0x8", 00:05:18.344 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67617" 00:05:18.344 }' 00:05:18.602 07:11:20 -- rpc/rpc.sh@43 -- # jq length 00:05:18.602 07:11:20 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:18.602 07:11:20 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.602 07:11:20 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.602 07:11:20 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.602 07:11:20 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.602 07:11:20 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.602 07:11:20 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.602 07:11:20 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.871 ************************************ 00:05:18.871 END TEST rpc_trace_cmd_test 00:05:18.871 ************************************ 00:05:18.871 07:11:20 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.871 00:05:18.871 real 0m0.291s 00:05:18.871 user 0m0.258s 00:05:18.871 sys 0m0.022s 00:05:18.871 07:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.871 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.871 07:11:20 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:18.871 07:11:20 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:18.871 07:11:20 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:18.871 07:11:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.871 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.871 ************************************ 00:05:18.871 START TEST go_rpc 00:05:18.871 ************************************ 00:05:18.871 07:11:20 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:18.871 07:11:20 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.871 07:11:20 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:18.871 07:11:20 -- rpc/rpc.sh@52 -- # jq length 00:05:18.871 07:11:20 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:18.871 07:11:20 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.871 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.871 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.871 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.871 07:11:20 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:18.871 07:11:20 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.872 07:11:20 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["f9c3bcbb-fb14-45a8-aa00-a5dc061269b2"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"f9c3bcbb-fb14-45a8-aa00-a5dc061269b2","zoned":false}]' 00:05:18.872 07:11:20 -- rpc/rpc.sh@57 -- # jq length 00:05:18.872 07:11:20 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:18.872 07:11:20 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.872 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.872 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.872 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.872 07:11:20 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.872 07:11:20 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:18.872 07:11:20 -- rpc/rpc.sh@61 -- # jq length 00:05:19.163 07:11:20 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:19.163 00:05:19.163 real 0m0.218s 00:05:19.163 user 0m0.149s 00:05:19.163 sys 0m0.037s 00:05:19.163 07:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.163 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.163 ************************************ 00:05:19.163 END TEST go_rpc 00:05:19.163 ************************************ 00:05:19.163 07:11:20 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.163 07:11:20 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.163 07:11:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.163 07:11:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.163 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.163 ************************************ 00:05:19.163 START TEST rpc_daemon_integrity 00:05:19.163 ************************************ 00:05:19.163 07:11:20 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:19.163 07:11:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.163 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.163 07:11:20 -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.163 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.163 07:11:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.163 07:11:20 -- rpc/rpc.sh@13 -- # jq length 00:05:19.163 07:11:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.163 07:11:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.163 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.163 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.163 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.163 07:11:20 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:19.163 07:11:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.163 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.163 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.163 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.163 07:11:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.163 { 00:05:19.163 "aliases": [ 00:05:19.163 "63b6c4ac-84b9-45dd-b15a-cda833c91607" 00:05:19.163 ], 00:05:19.163 "assigned_rate_limits": { 00:05:19.163 "r_mbytes_per_sec": 0, 00:05:19.163 "rw_ios_per_sec": 0, 00:05:19.163 "rw_mbytes_per_sec": 0, 00:05:19.163 "w_mbytes_per_sec": 0 00:05:19.163 }, 00:05:19.163 "block_size": 512, 00:05:19.163 "claimed": false, 00:05:19.163 "driver_specific": {}, 00:05:19.163 "memory_domains": [ 00:05:19.163 { 00:05:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.163 "dma_device_type": 2 00:05:19.163 } 00:05:19.163 ], 00:05:19.163 "name": "Malloc3", 00:05:19.163 "num_blocks": 16384, 00:05:19.163 "product_name": "Malloc disk", 00:05:19.163 "supported_io_types": { 00:05:19.163 "abort": true, 00:05:19.163 "compare": false, 00:05:19.163 "compare_and_write": false, 00:05:19.163 "flush": true, 00:05:19.163 "nvme_admin": false, 00:05:19.163 "nvme_io": false, 00:05:19.163 "read": true, 00:05:19.163 "reset": true, 00:05:19.163 "unmap": true, 00:05:19.163 "write": true, 00:05:19.163 "write_zeroes": true 00:05:19.163 }, 00:05:19.163 "uuid": "63b6c4ac-84b9-45dd-b15a-cda833c91607", 00:05:19.163 "zoned": false 00:05:19.163 } 00:05:19.163 ]' 00:05:19.163 07:11:20 -- rpc/rpc.sh@17 -- # jq length 00:05:19.163 07:11:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.163 07:11:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:19.164 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.164 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.164 [2024-11-04 07:11:20.922316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:19.164 [2024-11-04 07:11:20.922356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.164 [2024-11-04 07:11:20.922372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14d4990 00:05:19.164 [2024-11-04 07:11:20.922381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.164 [2024-11-04 07:11:20.923651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.164 [2024-11-04 07:11:20.923712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.164 Passthru0 00:05:19.164 07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.164 07:11:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.164 07:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.164 07:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.164 
07:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.164 07:11:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.164 { 00:05:19.164 "aliases": [ 00:05:19.164 "63b6c4ac-84b9-45dd-b15a-cda833c91607" 00:05:19.164 ], 00:05:19.164 "assigned_rate_limits": { 00:05:19.164 "r_mbytes_per_sec": 0, 00:05:19.164 "rw_ios_per_sec": 0, 00:05:19.164 "rw_mbytes_per_sec": 0, 00:05:19.164 "w_mbytes_per_sec": 0 00:05:19.164 }, 00:05:19.164 "block_size": 512, 00:05:19.164 "claim_type": "exclusive_write", 00:05:19.164 "claimed": true, 00:05:19.164 "driver_specific": {}, 00:05:19.164 "memory_domains": [ 00:05:19.164 { 00:05:19.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.164 "dma_device_type": 2 00:05:19.164 } 00:05:19.164 ], 00:05:19.164 "name": "Malloc3", 00:05:19.164 "num_blocks": 16384, 00:05:19.164 "product_name": "Malloc disk", 00:05:19.164 "supported_io_types": { 00:05:19.164 "abort": true, 00:05:19.164 "compare": false, 00:05:19.164 "compare_and_write": false, 00:05:19.164 "flush": true, 00:05:19.164 "nvme_admin": false, 00:05:19.164 "nvme_io": false, 00:05:19.164 "read": true, 00:05:19.164 "reset": true, 00:05:19.164 "unmap": true, 00:05:19.164 "write": true, 00:05:19.164 "write_zeroes": true 00:05:19.164 }, 00:05:19.164 "uuid": "63b6c4ac-84b9-45dd-b15a-cda833c91607", 00:05:19.164 "zoned": false 00:05:19.164 }, 00:05:19.164 { 00:05:19.164 "aliases": [ 00:05:19.164 "3986a33a-9e9e-50a9-ba87-9768591a98f3" 00:05:19.164 ], 00:05:19.164 "assigned_rate_limits": { 00:05:19.164 "r_mbytes_per_sec": 0, 00:05:19.164 "rw_ios_per_sec": 0, 00:05:19.164 "rw_mbytes_per_sec": 0, 00:05:19.164 "w_mbytes_per_sec": 0 00:05:19.164 }, 00:05:19.164 "block_size": 512, 00:05:19.164 "claimed": false, 00:05:19.164 "driver_specific": { 00:05:19.164 "passthru": { 00:05:19.164 "base_bdev_name": "Malloc3", 00:05:19.164 "name": "Passthru0" 00:05:19.164 } 00:05:19.164 }, 00:05:19.164 "memory_domains": [ 00:05:19.164 { 00:05:19.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.164 "dma_device_type": 2 00:05:19.164 } 00:05:19.164 ], 00:05:19.164 "name": "Passthru0", 00:05:19.164 "num_blocks": 16384, 00:05:19.164 "product_name": "passthru", 00:05:19.164 "supported_io_types": { 00:05:19.164 "abort": true, 00:05:19.164 "compare": false, 00:05:19.164 "compare_and_write": false, 00:05:19.164 "flush": true, 00:05:19.164 "nvme_admin": false, 00:05:19.164 "nvme_io": false, 00:05:19.164 "read": true, 00:05:19.164 "reset": true, 00:05:19.164 "unmap": true, 00:05:19.164 "write": true, 00:05:19.164 "write_zeroes": true 00:05:19.164 }, 00:05:19.164 "uuid": "3986a33a-9e9e-50a9-ba87-9768591a98f3", 00:05:19.164 "zoned": false 00:05:19.164 } 00:05:19.164 ]' 00:05:19.164 07:11:20 -- rpc/rpc.sh@21 -- # jq length 00:05:19.436 07:11:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.436 07:11:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.436 07:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.436 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.436 07:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.436 07:11:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:19.436 07:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.436 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.436 07:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.436 07:11:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.436 07:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.436 07:11:21 -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.436 07:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.436 07:11:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.436 07:11:21 -- rpc/rpc.sh@26 -- # jq length 00:05:19.436 07:11:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.436 00:05:19.436 real 0m0.323s 00:05:19.436 user 0m0.225s 00:05:19.436 sys 0m0.029s 00:05:19.436 07:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.436 ************************************ 00:05:19.436 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.436 END TEST rpc_daemon_integrity 00:05:19.436 ************************************ 00:05:19.436 07:11:21 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:19.436 07:11:21 -- rpc/rpc.sh@84 -- # killprocess 67617 00:05:19.436 07:11:21 -- common/autotest_common.sh@926 -- # '[' -z 67617 ']' 00:05:19.436 07:11:21 -- common/autotest_common.sh@930 -- # kill -0 67617 00:05:19.436 07:11:21 -- common/autotest_common.sh@931 -- # uname 00:05:19.436 07:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.436 07:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67617 00:05:19.436 07:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.436 07:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.436 killing process with pid 67617 00:05:19.436 07:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67617' 00:05:19.436 07:11:21 -- common/autotest_common.sh@945 -- # kill 67617 00:05:19.436 07:11:21 -- common/autotest_common.sh@950 -- # wait 67617 00:05:19.695 00:05:19.695 real 0m3.083s 00:05:19.695 user 0m4.164s 00:05:19.695 sys 0m0.714s 00:05:19.695 07:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.695 ************************************ 00:05:19.695 END TEST rpc 00:05:19.695 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.695 ************************************ 00:05:19.954 07:11:21 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.954 07:11:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.954 07:11:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.954 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.954 ************************************ 00:05:19.954 START TEST rpc_client 00:05:19.954 ************************************ 00:05:19.954 07:11:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.954 * Looking for test storage... 
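Note: the rpc_daemon_integrity run above drives a malloc/passthru bdev lifecycle entirely over RPC; a minimal sketch of the same sequence by hand, assuming a target is already listening on /var/tmp/spdk.sock and using illustrative bdev names:

  # create an 8 MiB malloc bdev with 512-byte blocks (the created name is printed)
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
  # layer a passthru bdev on top of it, then inspect what is registered
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc3 -p Passthru0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length
  # tear down in reverse order
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc3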
00:05:19.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.954 07:11:21 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.954 OK 00:05:19.954 07:11:21 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.954 00:05:19.954 real 0m0.103s 00:05:19.954 user 0m0.040s 00:05:19.954 sys 0m0.068s 00:05:19.954 07:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.954 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.954 ************************************ 00:05:19.954 END TEST rpc_client 00:05:19.954 ************************************ 00:05:19.954 07:11:21 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.954 07:11:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.954 07:11:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.954 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:19.954 ************************************ 00:05:19.954 START TEST json_config 00:05:19.954 ************************************ 00:05:19.954 07:11:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.954 07:11:21 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.954 07:11:21 -- nvmf/common.sh@7 -- # uname -s 00:05:19.954 07:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.954 07:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.954 07:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.954 07:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.954 07:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.954 07:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.954 07:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.954 07:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.954 07:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.954 07:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.954 07:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:05:19.954 07:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:05:19.954 07:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.954 07:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.954 07:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.954 07:11:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.954 07:11:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.954 07:11:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.954 07:11:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.954 07:11:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.954 07:11:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.954 07:11:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.954 07:11:21 -- paths/export.sh@5 -- # export PATH 00:05:19.954 07:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.954 07:11:21 -- nvmf/common.sh@46 -- # : 0 00:05:19.954 07:11:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.954 07:11:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.954 07:11:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.954 07:11:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.954 07:11:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.954 07:11:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.954 07:11:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.954 07:11:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.954 07:11:21 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:19.954 07:11:21 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:19.954 07:11:21 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:19.954 07:11:21 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.954 07:11:21 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.954 07:11:21 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:19.954 07:11:21 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.954 07:11:21 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:19.954 07:11:21 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.954 07:11:21 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:19.954 07:11:21 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:19.954 07:11:21 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:19.954 07:11:21 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:19.954 07:11:21 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.954 INFO: JSON configuration test init 
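Note: the nvmf test environment sourced above takes its host identity from nvme-cli; a minimal sketch of that derivation, assuming nvme-cli is installed (the shell expansion here is illustrative, not the script's exact code):

  # generate a host NQN and reuse its UUID suffix as the host ID,
  # mirroring the NVME_HOSTNQN / NVME_HOSTID values seen in the log above
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"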
00:05:19.954 07:11:21 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:19.954 07:11:21 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:19.954 07:11:21 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:19.954 07:11:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:19.954 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.213 07:11:21 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:20.213 07:11:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.213 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.213 07:11:21 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:20.213 07:11:21 -- json_config/json_config.sh@98 -- # local app=target 00:05:20.213 07:11:21 -- json_config/json_config.sh@99 -- # shift 00:05:20.214 07:11:21 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:20.214 07:11:21 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:20.214 07:11:21 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:20.214 07:11:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.214 07:11:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.214 07:11:21 -- json_config/json_config.sh@111 -- # app_pid[$app]=67923 00:05:20.214 07:11:21 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:20.214 Waiting for target to run... 00:05:20.214 07:11:21 -- json_config/json_config.sh@114 -- # waitforlisten 67923 /var/tmp/spdk_tgt.sock 00:05:20.214 07:11:21 -- common/autotest_common.sh@819 -- # '[' -z 67923 ']' 00:05:20.214 07:11:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.214 07:11:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.214 07:11:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.214 07:11:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.214 07:11:21 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:20.214 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.214 [2024-11-04 07:11:21.865769] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:20.214 [2024-11-04 07:11:21.865870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67923 ] 00:05:20.472 [2024-11-04 07:11:22.311459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.731 [2024-11-04 07:11:22.358807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.731 [2024-11-04 07:11:22.358956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.297 07:11:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.297 07:11:22 -- common/autotest_common.sh@852 -- # return 0 00:05:21.297 00:05:21.297 07:11:22 -- json_config/json_config.sh@115 -- # echo '' 00:05:21.297 07:11:22 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:21.297 07:11:22 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:21.297 07:11:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:21.297 07:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.297 07:11:22 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:21.297 07:11:22 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:21.297 07:11:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:21.297 07:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.297 07:11:22 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:21.297 07:11:22 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:21.297 07:11:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:21.865 07:11:23 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:21.865 07:11:23 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:21.865 07:11:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:21.865 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.865 07:11:23 -- json_config/json_config.sh@48 -- # local ret=0 00:05:21.865 07:11:23 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:21.865 07:11:23 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:21.865 07:11:23 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:21.865 07:11:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:21.865 07:11:23 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:21.865 07:11:23 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:21.865 07:11:23 -- json_config/json_config.sh@51 -- # local get_types 00:05:21.865 07:11:23 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:21.865 07:11:23 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:21.865 07:11:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:21.865 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.123 07:11:23 -- json_config/json_config.sh@58 -- # return 0 00:05:22.123 07:11:23 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:22.123 07:11:23 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
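Note: a minimal sketch of the startup sequence exercised above — start the target paused, push a config over its RPC socket, then check which notification types it reports; binary, socket path and arguments as used in this run:

  # start spdk_tgt on one core and hold it until an RPC arrives
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # feed it a generated JSON config, as the test does above
  scripts/gen_nvme.sh --json-with-subsystems | \
      scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  # bdev_register / bdev_unregister should be listed, matching the check above
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types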
00:05:22.123 07:11:23 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:22.123 07:11:23 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:22.123 07:11:23 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:22.123 07:11:23 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:22.123 07:11:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:22.123 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.123 07:11:23 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:22.123 07:11:23 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:22.123 07:11:23 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:22.123 07:11:23 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.123 07:11:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.382 MallocForNvmf0 00:05:22.382 07:11:24 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.382 07:11:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.640 MallocForNvmf1 00:05:22.640 07:11:24 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.640 07:11:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.899 [2024-11-04 07:11:24.497526] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.899 07:11:24 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:22.899 07:11:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.157 07:11:24 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.157 07:11:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.157 07:11:24 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.157 07:11:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.414 07:11:25 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.414 07:11:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.672 [2024-11-04 07:11:25.417980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.672 07:11:25 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:23.672 07:11:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:23.672 07:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.672 07:11:25 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:23.672 07:11:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:23.672 07:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.930 07:11:25 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:23.930 07:11:25 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:23.930 07:11:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.189 MallocBdevForConfigChangeCheck 00:05:24.189 07:11:25 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:24.189 07:11:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:24.189 07:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.189 07:11:25 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:24.189 07:11:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.447 INFO: shutting down applications... 00:05:24.447 07:11:26 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:24.447 07:11:26 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:24.447 07:11:26 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:24.447 07:11:26 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:24.447 07:11:26 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:25.013 Calling clear_iscsi_subsystem 00:05:25.013 Calling clear_nvmf_subsystem 00:05:25.013 Calling clear_nbd_subsystem 00:05:25.013 Calling clear_ublk_subsystem 00:05:25.013 Calling clear_vhost_blk_subsystem 00:05:25.013 Calling clear_vhost_scsi_subsystem 00:05:25.013 Calling clear_scheduler_subsystem 00:05:25.013 Calling clear_bdev_subsystem 00:05:25.013 Calling clear_accel_subsystem 00:05:25.013 Calling clear_vmd_subsystem 00:05:25.013 Calling clear_sock_subsystem 00:05:25.013 Calling clear_iobuf_subsystem 00:05:25.013 07:11:26 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:25.013 07:11:26 -- json_config/json_config.sh@396 -- # count=100 00:05:25.013 07:11:26 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:25.013 07:11:26 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.013 07:11:26 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:25.013 07:11:26 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:25.271 07:11:27 -- json_config/json_config.sh@398 -- # break 00:05:25.271 07:11:27 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:25.271 07:11:27 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:25.271 07:11:27 -- json_config/json_config.sh@120 -- # local app=target 00:05:25.271 07:11:27 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:25.271 07:11:27 -- json_config/json_config.sh@124 -- # [[ -n 67923 ]] 00:05:25.271 07:11:27 -- json_config/json_config.sh@127 -- # kill -SIGINT 67923 00:05:25.271 07:11:27 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
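Note: the create_nvmf_subsystem_config step above boils down to a handful of RPCs; a minimal sketch with the same arguments as this run (the $rpc shorthand is only for brevity):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # backing bdevs for the namespaces
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420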
00:05:25.271 07:11:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:25.271 07:11:27 -- json_config/json_config.sh@130 -- # kill -0 67923 00:05:25.271 07:11:27 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:25.838 07:11:27 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:25.838 07:11:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:25.838 07:11:27 -- json_config/json_config.sh@130 -- # kill -0 67923 00:05:25.838 07:11:27 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:25.838 07:11:27 -- json_config/json_config.sh@132 -- # break 00:05:25.838 07:11:27 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:25.838 SPDK target shutdown done 00:05:25.838 07:11:27 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:25.838 INFO: relaunching applications... 00:05:25.839 07:11:27 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:25.839 07:11:27 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.839 07:11:27 -- json_config/json_config.sh@98 -- # local app=target 00:05:25.839 07:11:27 -- json_config/json_config.sh@99 -- # shift 00:05:25.839 07:11:27 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:25.839 07:11:27 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:25.839 07:11:27 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:25.839 07:11:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.839 07:11:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.839 07:11:27 -- json_config/json_config.sh@111 -- # app_pid[$app]=68192 00:05:25.839 07:11:27 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.839 07:11:27 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:25.839 Waiting for target to run... 00:05:25.839 07:11:27 -- json_config/json_config.sh@114 -- # waitforlisten 68192 /var/tmp/spdk_tgt.sock 00:05:25.839 07:11:27 -- common/autotest_common.sh@819 -- # '[' -z 68192 ']' 00:05:25.839 07:11:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.839 07:11:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.839 07:11:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.839 07:11:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.839 07:11:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.839 [2024-11-04 07:11:27.566497] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:25.839 [2024-11-04 07:11:27.566608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68192 ] 00:05:26.406 [2024-11-04 07:11:27.991272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.406 [2024-11-04 07:11:28.041500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.406 [2024-11-04 07:11:28.041645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.664 [2024-11-04 07:11:28.335961] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.664 [2024-11-04 07:11:28.368074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.923 07:11:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.923 07:11:28 -- common/autotest_common.sh@852 -- # return 0 00:05:26.923 00:05:26.923 07:11:28 -- json_config/json_config.sh@115 -- # echo '' 00:05:26.923 07:11:28 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:26.923 INFO: Checking if target configuration is the same... 00:05:26.923 07:11:28 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:26.923 07:11:28 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.923 07:11:28 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:26.923 07:11:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.923 + '[' 2 -ne 2 ']' 00:05:26.923 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:26.923 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:26.923 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:26.923 +++ basename /dev/fd/62 00:05:26.923 ++ mktemp /tmp/62.XXX 00:05:26.923 + tmp_file_1=/tmp/62.jzq 00:05:26.923 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.923 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.923 + tmp_file_2=/tmp/spdk_tgt_config.json.ckU 00:05:26.923 + ret=0 00:05:26.923 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:27.181 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:27.181 + diff -u /tmp/62.jzq /tmp/spdk_tgt_config.json.ckU 00:05:27.181 INFO: JSON config files are the same 00:05:27.181 + echo 'INFO: JSON config files are the same' 00:05:27.181 + rm /tmp/62.jzq /tmp/spdk_tgt_config.json.ckU 00:05:27.181 + exit 0 00:05:27.181 07:11:28 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:27.181 INFO: changing configuration and checking if this can be detected... 00:05:27.181 07:11:28 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
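Note: a minimal sketch of the "configuration is the same" check performed above — dump the live config, normalize both sides with config_filter.py, and diff; the temp file names are illustrative:

  # live config from the relaunched target
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  # config the target was started from
  test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'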
00:05:27.181 07:11:28 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.181 07:11:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.748 07:11:29 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.748 07:11:29 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:27.748 07:11:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.748 + '[' 2 -ne 2 ']' 00:05:27.748 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:27.748 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:27.748 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:27.748 +++ basename /dev/fd/62 00:05:27.748 ++ mktemp /tmp/62.XXX 00:05:27.748 + tmp_file_1=/tmp/62.0Z4 00:05:27.748 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.748 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.748 + tmp_file_2=/tmp/spdk_tgt_config.json.NSg 00:05:27.748 + ret=0 00:05:27.748 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:28.005 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:28.005 + diff -u /tmp/62.0Z4 /tmp/spdk_tgt_config.json.NSg 00:05:28.005 + ret=1 00:05:28.005 + echo '=== Start of file: /tmp/62.0Z4 ===' 00:05:28.005 + cat /tmp/62.0Z4 00:05:28.005 + echo '=== End of file: /tmp/62.0Z4 ===' 00:05:28.005 + echo '' 00:05:28.005 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NSg ===' 00:05:28.005 + cat /tmp/spdk_tgt_config.json.NSg 00:05:28.005 + echo '=== End of file: /tmp/spdk_tgt_config.json.NSg ===' 00:05:28.005 + echo '' 00:05:28.005 + rm /tmp/62.0Z4 /tmp/spdk_tgt_config.json.NSg 00:05:28.005 + exit 1 00:05:28.005 INFO: configuration change detected. 00:05:28.005 07:11:29 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:05:28.005 07:11:29 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:28.005 07:11:29 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:28.005 07:11:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.005 07:11:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.005 07:11:29 -- json_config/json_config.sh@360 -- # local ret=0 00:05:28.005 07:11:29 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:28.005 07:11:29 -- json_config/json_config.sh@370 -- # [[ -n 68192 ]] 00:05:28.005 07:11:29 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:28.005 07:11:29 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:28.005 07:11:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.005 07:11:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.005 07:11:29 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:28.005 07:11:29 -- json_config/json_config.sh@246 -- # uname -s 00:05:28.005 07:11:29 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:28.005 07:11:29 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:28.005 07:11:29 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:28.005 07:11:29 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:28.005 07:11:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.005 07:11:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.005 07:11:29 -- json_config/json_config.sh@376 -- # killprocess 68192 00:05:28.005 07:11:29 -- common/autotest_common.sh@926 -- # '[' -z 68192 ']' 00:05:28.005 07:11:29 -- common/autotest_common.sh@930 -- # kill -0 68192 00:05:28.005 07:11:29 -- common/autotest_common.sh@931 -- # uname 00:05:28.005 07:11:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.005 07:11:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68192 00:05:28.005 07:11:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.005 07:11:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.005 killing process with pid 68192 00:05:28.005 07:11:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68192' 00:05:28.005 07:11:29 -- common/autotest_common.sh@945 -- # kill 68192 00:05:28.005 07:11:29 -- common/autotest_common.sh@950 -- # wait 68192 00:05:28.263 07:11:30 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.263 07:11:30 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:28.263 07:11:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.263 07:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.263 07:11:30 -- json_config/json_config.sh@381 -- # return 0 00:05:28.263 INFO: Success 00:05:28.263 07:11:30 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:28.263 00:05:28.263 real 0m8.379s 00:05:28.263 user 0m11.975s 00:05:28.263 sys 0m1.855s 00:05:28.263 07:11:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.263 07:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.263 ************************************ 00:05:28.263 END TEST json_config 00:05:28.263 ************************************ 00:05:28.521 07:11:30 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.521 
07:11:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.521 07:11:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.521 07:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.521 ************************************ 00:05:28.521 START TEST json_config_extra_key 00:05:28.521 ************************************ 00:05:28.521 07:11:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.521 07:11:30 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.521 07:11:30 -- nvmf/common.sh@7 -- # uname -s 00:05:28.521 07:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.521 07:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.521 07:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.521 07:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.521 07:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.521 07:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.521 07:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.521 07:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.522 07:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.522 07:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.522 07:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:05:28.522 07:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:05:28.522 07:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.522 07:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.522 07:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.522 07:11:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.522 07:11:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.522 07:11:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.522 07:11:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.522 07:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.522 07:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.522 07:11:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:28.522 07:11:30 -- paths/export.sh@5 -- # export PATH 00:05:28.522 07:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.522 07:11:30 -- nvmf/common.sh@46 -- # : 0 00:05:28.522 07:11:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:28.522 07:11:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:28.522 07:11:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:28.522 07:11:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.522 07:11:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.522 07:11:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:28.522 07:11:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:28.522 07:11:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.522 INFO: launching applications... 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68367 00:05:28.522 Waiting for target to run... 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68367 /var/tmp/spdk_tgt.sock 00:05:28.522 07:11:30 -- common/autotest_common.sh@819 -- # '[' -z 68367 ']' 00:05:28.522 07:11:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.522 07:11:30 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.522 07:11:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.522 07:11:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.522 07:11:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.522 07:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.522 [2024-11-04 07:11:30.273326] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:28.522 [2024-11-04 07:11:30.273418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:05:29.089 [2024-11-04 07:11:30.681866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.089 [2024-11-04 07:11:30.730135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.089 [2024-11-04 07:11:30.730312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.656 07:11:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.656 07:11:31 -- common/autotest_common.sh@852 -- # return 0 00:05:29.656 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:29.656 INFO: shutting down applications... 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
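Note: unlike the --wait-for-rpc flow earlier, the extra_key test above boots the target directly from a pre-built JSON; a minimal sketch of that launch with the paths used in this run ($tgt_pid and the rpc_get_methods poll are illustrative stand-ins for the harness's waitforlisten):

  # start the target with its whole configuration applied at boot
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  # once the RPC socket answers, initialization has completed
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 30 rpc_get_methods > /dev/null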
00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68367 ]] 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68367 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68367 00:05:29.656 07:11:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68367 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:30.223 SPDK target shutdown done 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:30.223 Success 00:05:30.223 07:11:31 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:30.223 00:05:30.223 real 0m1.669s 00:05:30.223 user 0m1.563s 00:05:30.223 sys 0m0.417s 00:05:30.223 07:11:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.223 07:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.223 ************************************ 00:05:30.223 END TEST json_config_extra_key 00:05:30.223 ************************************ 00:05:30.223 07:11:31 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.223 07:11:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.223 07:11:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.223 07:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.223 ************************************ 00:05:30.223 START TEST alias_rpc 00:05:30.223 ************************************ 00:05:30.223 07:11:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.223 * Looking for test storage... 00:05:30.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:30.223 07:11:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.223 07:11:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68442 00:05:30.223 07:11:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68442 00:05:30.223 07:11:31 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.223 07:11:31 -- common/autotest_common.sh@819 -- # '[' -z 68442 ']' 00:05:30.223 07:11:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.223 07:11:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.223 07:11:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
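Note: the "SPDK target shutdown done" message above comes from the same stop pattern each suite uses; a minimal sketch, assuming the target PID is in $tgt_pid:

  # ask the target to shut down cleanly, then poll up to ~15s for it to exit
  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
      kill -0 "$tgt_pid" 2>/dev/null || break
      sleep 0.5
  done
  echo 'SPDK target shutdown done'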
00:05:30.223 07:11:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.223 07:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.223 [2024-11-04 07:11:32.005794] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:30.223 [2024-11-04 07:11:32.005927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68442 ] 00:05:30.482 [2024-11-04 07:11:32.136250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.482 [2024-11-04 07:11:32.196519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.482 [2024-11-04 07:11:32.196652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.417 07:11:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.417 07:11:32 -- common/autotest_common.sh@852 -- # return 0 00:05:31.417 07:11:32 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:31.417 07:11:33 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68442 00:05:31.417 07:11:33 -- common/autotest_common.sh@926 -- # '[' -z 68442 ']' 00:05:31.417 07:11:33 -- common/autotest_common.sh@930 -- # kill -0 68442 00:05:31.417 07:11:33 -- common/autotest_common.sh@931 -- # uname 00:05:31.417 07:11:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.417 07:11:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68442 00:05:31.675 07:11:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.675 07:11:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.675 killing process with pid 68442 00:05:31.675 07:11:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68442' 00:05:31.675 07:11:33 -- common/autotest_common.sh@945 -- # kill 68442 00:05:31.675 07:11:33 -- common/autotest_common.sh@950 -- # wait 68442 00:05:31.933 00:05:31.933 real 0m1.746s 00:05:31.934 user 0m2.020s 00:05:31.934 sys 0m0.419s 00:05:31.934 07:11:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.934 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:31.934 ************************************ 00:05:31.934 END TEST alias_rpc 00:05:31.934 ************************************ 00:05:31.934 07:11:33 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:05:31.934 07:11:33 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.934 07:11:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.934 07:11:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.934 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:31.934 ************************************ 00:05:31.934 START TEST dpdk_mem_utility 00:05:31.934 ************************************ 00:05:31.934 07:11:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.934 * Looking for test storage... 
00:05:31.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:31.934 07:11:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:31.934 07:11:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68528 00:05:31.934 07:11:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.934 07:11:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68528 00:05:31.934 07:11:33 -- common/autotest_common.sh@819 -- # '[' -z 68528 ']' 00:05:31.934 07:11:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.934 07:11:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.934 07:11:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.934 07:11:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.934 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.192 [2024-11-04 07:11:33.810588] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:32.192 [2024-11-04 07:11:33.810700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68528 ] 00:05:32.192 [2024-11-04 07:11:33.950917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.451 [2024-11-04 07:11:34.032702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.451 [2024-11-04 07:11:34.032909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.018 07:11:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.018 07:11:34 -- common/autotest_common.sh@852 -- # return 0 00:05:33.018 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.018 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.018 07:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.018 07:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.018 { 00:05:33.018 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.018 } 00:05:33.018 07:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.018 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.278 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:33.278 1 heaps totaling size 814.000000 MiB 00:05:33.278 size: 814.000000 MiB heap id: 0 00:05:33.278 end heaps---------- 00:05:33.278 8 mempools totaling size 598.116089 MiB 00:05:33.278 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.278 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.278 size: 84.521057 MiB name: bdev_io_68528 00:05:33.278 size: 51.011292 MiB name: evtpool_68528 00:05:33.278 size: 50.003479 MiB name: msgpool_68528 00:05:33.278 size: 21.763794 MiB name: PDU_Pool 00:05:33.278 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.278 size: 0.026123 MiB name: Session_Pool 00:05:33.278 end mempools------- 00:05:33.278 6 memzones totaling size 4.142822 MiB 00:05:33.278 size: 1.000366 MiB name: RG_ring_0_68528 
00:05:33.278 size: 1.000366 MiB name: RG_ring_1_68528 00:05:33.278 size: 1.000366 MiB name: RG_ring_4_68528 00:05:33.278 size: 1.000366 MiB name: RG_ring_5_68528 00:05:33.278 size: 0.125366 MiB name: RG_ring_2_68528 00:05:33.278 size: 0.015991 MiB name: RG_ring_3_68528 00:05:33.278 end memzones------- 00:05:33.278 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.278 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:05:33.278 list of free elements. size: 12.485474 MiB 00:05:33.278 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:33.278 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:33.278 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:33.278 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:33.278 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:33.278 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:33.278 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:33.278 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:33.278 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:33.278 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:05:33.278 element at address: 0x20000b200000 with size: 0.489441 MiB 00:05:33.278 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:33.278 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:33.278 element at address: 0x200027e00000 with size: 0.397400 MiB 00:05:33.278 element at address: 0x200003a00000 with size: 0.351501 MiB 00:05:33.278 list of standard malloc elements. size: 199.251953 MiB 00:05:33.278 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:33.278 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:33.278 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:33.278 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:33.278 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:33.278 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:33.278 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:33.278 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:33.279 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:33.279 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:33.279 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:33.279 element at 
address: 0x200003affa80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa93f40 
with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:33.279 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e65bc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e65c80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6c880 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:33.279 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d740 with size: 0.000183 MiB 
00:05:33.280 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:33.280 element at 
address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:33.280 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:33.280 list of memzone associated elements. size: 602.262573 MiB 00:05:33.280 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:33.280 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.280 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:33.280 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.280 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:33.280 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68528_0 00:05:33.280 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:33.280 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68528_0 00:05:33.280 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:33.280 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68528_0 00:05:33.280 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:33.280 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.280 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:33.280 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.280 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:33.280 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68528 00:05:33.280 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:33.280 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68528 00:05:33.280 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:33.280 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68528 00:05:33.280 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:33.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.280 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:33.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.280 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:33.280 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.280 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:33.280 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:33.280 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:33.280 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68528 00:05:33.280 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:33.280 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68528 00:05:33.280 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:33.280 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68528 00:05:33.280 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:33.280 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68528 00:05:33.280 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:33.280 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68528 00:05:33.280 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:33.280 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.280 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:33.280 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.280 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:33.280 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.280 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:33.280 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68528 00:05:33.280 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:33.280 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.280 element at address: 0x200027e65d40 with size: 0.023743 MiB 00:05:33.280 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.280 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:33.280 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68528 00:05:33.280 element at address: 0x200027e6be80 with size: 0.002441 MiB 00:05:33.280 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.280 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:33.280 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68528 00:05:33.280 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:33.280 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68528 00:05:33.280 element at address: 0x200027e6c940 with size: 0.000305 MiB 00:05:33.280 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.280 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.280 07:11:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68528 00:05:33.280 07:11:34 -- common/autotest_common.sh@926 -- # '[' -z 68528 ']' 00:05:33.280 07:11:34 -- common/autotest_common.sh@930 -- # kill -0 68528 00:05:33.280 07:11:34 -- common/autotest_common.sh@931 -- # uname 00:05:33.280 07:11:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.280 07:11:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68528 00:05:33.280 07:11:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.280 07:11:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.280 killing process with pid 68528 00:05:33.280 07:11:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68528' 00:05:33.280 07:11:34 -- common/autotest_common.sh@945 -- # kill 68528 00:05:33.280 07:11:34 -- common/autotest_common.sh@950 -- # wait 68528 00:05:33.539 00:05:33.539 real 0m1.621s 00:05:33.539 user 0m1.767s 00:05:33.539 sys 0m0.418s 00:05:33.539 07:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.539 07:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.539 ************************************ 00:05:33.539 END TEST dpdk_mem_utility 00:05:33.539 ************************************ 00:05:33.539 07:11:35 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:33.539 07:11:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.539 07:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.539 07:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.539 ************************************ 00:05:33.539 START TEST event 00:05:33.539 ************************************ 00:05:33.539 07:11:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:33.798 * Looking for test storage... 
00:05:33.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.798 07:11:35 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:33.798 07:11:35 -- bdev/nbd_common.sh@6 -- # set -e 00:05:33.798 07:11:35 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.798 07:11:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:33.798 07:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.798 07:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.798 ************************************ 00:05:33.798 START TEST event_perf 00:05:33.798 ************************************ 00:05:33.798 07:11:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.798 Running I/O for 1 seconds...[2024-11-04 07:11:35.453436] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:33.798 [2024-11-04 07:11:35.453521] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68622 ] 00:05:33.798 [2024-11-04 07:11:35.587827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.057 [2024-11-04 07:11:35.649759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.057 [2024-11-04 07:11:35.649915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.057 [2024-11-04 07:11:35.650037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.057 [2024-11-04 07:11:35.650038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.992 Running I/O for 1 seconds... 00:05:34.992 lcore 0: 144944 00:05:34.992 lcore 1: 144946 00:05:34.992 lcore 2: 144944 00:05:34.992 lcore 3: 144947 00:05:34.992 done. 00:05:34.992 00:05:34.992 real 0m1.316s 00:05:34.992 user 0m4.132s 00:05:34.992 sys 0m0.065s 00:05:34.992 07:11:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.992 ************************************ 00:05:34.992 END TEST event_perf 00:05:34.992 ************************************ 00:05:34.992 07:11:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.992 07:11:36 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.992 07:11:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:34.992 07:11:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.992 07:11:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.992 ************************************ 00:05:34.992 START TEST event_reactor 00:05:34.992 ************************************ 00:05:34.992 07:11:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.992 [2024-11-04 07:11:36.828716] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:34.992 [2024-11-04 07:11:36.828812] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68655 ] 00:05:35.252 [2024-11-04 07:11:36.966740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.252 [2024-11-04 07:11:37.035045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.656 test_start 00:05:36.656 oneshot 00:05:36.656 tick 100 00:05:36.656 tick 100 00:05:36.656 tick 250 00:05:36.656 tick 100 00:05:36.656 tick 100 00:05:36.656 tick 100 00:05:36.656 tick 250 00:05:36.656 tick 500 00:05:36.656 tick 100 00:05:36.656 tick 100 00:05:36.656 tick 250 00:05:36.656 tick 100 00:05:36.656 tick 100 00:05:36.656 test_end 00:05:36.656 00:05:36.656 real 0m1.317s 00:05:36.656 user 0m1.150s 00:05:36.656 sys 0m0.062s 00:05:36.656 07:11:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.656 ************************************ 00:05:36.656 END TEST event_reactor 00:05:36.656 ************************************ 00:05:36.656 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.656 07:11:38 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.656 07:11:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:36.656 07:11:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.656 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:05:36.656 ************************************ 00:05:36.656 START TEST event_reactor_perf 00:05:36.656 ************************************ 00:05:36.657 07:11:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.657 [2024-11-04 07:11:38.196165] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:36.657 [2024-11-04 07:11:38.196253] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68696 ] 00:05:36.657 [2024-11-04 07:11:38.332063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.657 [2024-11-04 07:11:38.399401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.032 test_start 00:05:38.032 test_end 00:05:38.032 Performance: 478186 events per second 00:05:38.032 00:05:38.032 real 0m1.286s 00:05:38.032 user 0m1.113s 00:05:38.032 sys 0m0.067s 00:05:38.032 07:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.032 ************************************ 00:05:38.032 END TEST event_reactor_perf 00:05:38.032 ************************************ 00:05:38.032 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.032 07:11:39 -- event/event.sh@49 -- # uname -s 00:05:38.032 07:11:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:38.032 07:11:39 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.032 07:11:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.032 07:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.032 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.032 ************************************ 00:05:38.032 START TEST event_scheduler 00:05:38.032 ************************************ 00:05:38.032 07:11:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.032 * Looking for test storage... 00:05:38.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:38.032 07:11:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:38.032 07:11:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68751 00:05:38.032 07:11:39 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:38.032 07:11:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.032 07:11:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 68751 00:05:38.032 07:11:39 -- common/autotest_common.sh@819 -- # '[' -z 68751 ']' 00:05:38.032 07:11:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.032 07:11:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.032 07:11:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.032 07:11:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.032 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.032 [2024-11-04 07:11:39.661964] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:38.032 [2024-11-04 07:11:39.662070] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68751 ] 00:05:38.032 [2024-11-04 07:11:39.804345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.291 [2024-11-04 07:11:39.886183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.291 [2024-11-04 07:11:39.886354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.291 [2024-11-04 07:11:39.887468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.291 [2024-11-04 07:11:39.887537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.291 07:11:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.291 07:11:39 -- common/autotest_common.sh@852 -- # return 0 00:05:38.291 07:11:39 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.291 07:11:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 POWER: Env isn't set yet! 00:05:38.291 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:38.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.291 POWER: Cannot set governor of lcore 0 to userspace 00:05:38.291 POWER: Attempting to initialise PSTAT power management... 00:05:38.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.291 POWER: Cannot set governor of lcore 0 to performance 00:05:38.291 POWER: Attempting to initialise AMD PSTATE power management... 00:05:38.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.291 POWER: Cannot set governor of lcore 0 to userspace 00:05:38.291 POWER: Attempting to initialise CPPC power management... 00:05:38.291 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:38.291 POWER: Cannot set governor of lcore 0 to userspace 00:05:38.291 POWER: Attempting to initialise VM power management... 
00:05:38.291 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:38.291 POWER: Unable to set Power Management Environment for lcore 0 00:05:38.291 [2024-11-04 07:11:39.943208] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:38.291 [2024-11-04 07:11:39.943227] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:38.291 [2024-11-04 07:11:39.943242] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.291 [2024-11-04 07:11:39.943262] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.291 [2024-11-04 07:11:39.943274] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.291 [2024-11-04 07:11:39.943285] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.291 07:11:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.291 07:11:39 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.291 07:11:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 [2024-11-04 07:11:40.077563] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:38.291 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.291 07:11:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.291 07:11:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.291 07:11:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.291 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 ************************************ 00:05:38.291 START TEST scheduler_create_thread 00:05:38.291 ************************************ 00:05:38.291 07:11:40 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:38.291 07:11:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:38.291 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 2 00:05:38.291 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.291 07:11:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:38.291 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 3 00:05:38.291 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.291 07:11:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:38.291 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 4 00:05:38.291 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.291 07:11:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.291 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.291 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.291 5 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 6 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 7 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 8 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 9 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 10 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.550 07:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.550 07:11:40 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.550 07:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.550 07:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.925 07:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.925 07:11:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.925 07:11:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.925 07:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.925 07:11:41 -- common/autotest_common.sh@10 -- # set +x 00:05:41.301 07:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.301 00:05:41.301 real 0m2.615s 00:05:41.301 user 0m0.018s 00:05:41.301 sys 0m0.006s 00:05:41.301 07:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.301 07:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:41.301 
************************************ 00:05:41.301 END TEST scheduler_create_thread 00:05:41.301 ************************************ 00:05:41.301 07:11:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.301 07:11:42 -- scheduler/scheduler.sh@46 -- # killprocess 68751 00:05:41.301 07:11:42 -- common/autotest_common.sh@926 -- # '[' -z 68751 ']' 00:05:41.301 07:11:42 -- common/autotest_common.sh@930 -- # kill -0 68751 00:05:41.301 07:11:42 -- common/autotest_common.sh@931 -- # uname 00:05:41.301 07:11:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.301 07:11:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68751 00:05:41.301 07:11:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:41.301 07:11:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:41.301 killing process with pid 68751 00:05:41.301 07:11:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68751' 00:05:41.301 07:11:42 -- common/autotest_common.sh@945 -- # kill 68751 00:05:41.301 07:11:42 -- common/autotest_common.sh@950 -- # wait 68751 00:05:41.559 [2024-11-04 07:11:43.184977] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:41.818 00:05:41.818 real 0m3.916s 00:05:41.818 user 0m5.754s 00:05:41.818 sys 0m0.392s 00:05:41.818 07:11:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.818 07:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.818 ************************************ 00:05:41.818 END TEST event_scheduler 00:05:41.818 ************************************ 00:05:41.818 07:11:43 -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.818 07:11:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.818 07:11:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.818 07:11:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.818 07:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.818 ************************************ 00:05:41.818 START TEST app_repeat 00:05:41.818 ************************************ 00:05:41.818 07:11:43 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:41.818 07:11:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.818 07:11:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.818 07:11:43 -- event/event.sh@13 -- # local nbd_list 00:05:41.818 07:11:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.818 07:11:43 -- event/event.sh@14 -- # local bdev_list 00:05:41.818 07:11:43 -- event/event.sh@15 -- # local repeat_times=4 00:05:41.818 07:11:43 -- event/event.sh@17 -- # modprobe nbd 00:05:41.818 07:11:43 -- event/event.sh@19 -- # repeat_pid=68855 00:05:41.818 07:11:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.818 Process app_repeat pid: 68855 00:05:41.818 07:11:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68855' 00:05:41.818 07:11:43 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.818 07:11:43 -- event/event.sh@23 -- # for i in {0..2} 00:05:41.818 spdk_app_start Round 0 00:05:41.818 07:11:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.818 07:11:43 -- event/event.sh@25 -- # waitforlisten 68855 /var/tmp/spdk-nbd.sock 00:05:41.818 07:11:43 -- common/autotest_common.sh@819 -- # '[' -z 68855 ']' 00:05:41.818 07:11:43 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.818 07:11:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.818 07:11:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.818 07:11:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.818 07:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.818 [2024-11-04 07:11:43.535996] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:41.818 [2024-11-04 07:11:43.536076] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68855 ] 00:05:42.076 [2024-11-04 07:11:43.667730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.076 [2024-11-04 07:11:43.730983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.076 [2024-11-04 07:11:43.730995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.011 07:11:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.011 07:11:44 -- common/autotest_common.sh@852 -- # return 0 00:05:43.011 07:11:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.011 Malloc0 00:05:43.011 07:11:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.268 Malloc1 00:05:43.268 07:11:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.268 07:11:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.526 /dev/nbd0 00:05:43.526 07:11:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.526 07:11:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.526 07:11:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:43.526 07:11:45 -- common/autotest_common.sh@857 -- # local i 00:05:43.526 07:11:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:43.526 
07:11:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:43.526 07:11:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:43.526 07:11:45 -- common/autotest_common.sh@861 -- # break 00:05:43.526 07:11:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:43.526 07:11:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:43.526 07:11:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.526 1+0 records in 00:05:43.526 1+0 records out 00:05:43.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266947 s, 15.3 MB/s 00:05:43.526 07:11:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.526 07:11:45 -- common/autotest_common.sh@874 -- # size=4096 00:05:43.526 07:11:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.526 07:11:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:43.526 07:11:45 -- common/autotest_common.sh@877 -- # return 0 00:05:43.526 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.526 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.526 07:11:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.785 /dev/nbd1 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.043 07:11:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:44.043 07:11:45 -- common/autotest_common.sh@857 -- # local i 00:05:44.043 07:11:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:44.043 07:11:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:44.043 07:11:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:44.043 07:11:45 -- common/autotest_common.sh@861 -- # break 00:05:44.043 07:11:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:44.043 07:11:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:44.043 07:11:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.043 1+0 records in 00:05:44.043 1+0 records out 00:05:44.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402657 s, 10.2 MB/s 00:05:44.043 07:11:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.043 07:11:45 -- common/autotest_common.sh@874 -- # size=4096 00:05:44.043 07:11:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.043 07:11:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:44.043 07:11:45 -- common/autotest_common.sh@877 -- # return 0 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.043 { 00:05:44.043 "bdev_name": "Malloc0", 00:05:44.043 "nbd_device": "/dev/nbd0" 00:05:44.043 }, 00:05:44.043 { 00:05:44.043 "bdev_name": 
"Malloc1", 00:05:44.043 "nbd_device": "/dev/nbd1" 00:05:44.043 } 00:05:44.043 ]' 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.043 07:11:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.043 { 00:05:44.043 "bdev_name": "Malloc0", 00:05:44.043 "nbd_device": "/dev/nbd0" 00:05:44.043 }, 00:05:44.043 { 00:05:44.043 "bdev_name": "Malloc1", 00:05:44.043 "nbd_device": "/dev/nbd1" 00:05:44.043 } 00:05:44.043 ]' 00:05:44.306 07:11:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.306 /dev/nbd1' 00:05:44.306 07:11:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.306 /dev/nbd1' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.307 256+0 records in 00:05:44.307 256+0 records out 00:05:44.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411776 s, 255 MB/s 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.307 256+0 records in 00:05:44.307 256+0 records out 00:05:44.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247895 s, 42.3 MB/s 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.307 256+0 records in 00:05:44.307 256+0 records out 00:05:44.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280341 s, 37.4 MB/s 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.307 07:11:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@41 -- # break 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.565 07:11:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@41 -- # break 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.823 07:11:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.081 07:11:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.340 07:11:46 -- bdev/nbd_common.sh@65 -- # true 00:05:45.340 07:11:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.340 07:11:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.341 07:11:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.341 07:11:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.341 07:11:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.341 07:11:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.600 07:11:47 -- event/event.sh@35 -- # sleep 3 00:05:45.600 [2024-11-04 07:11:47.353485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.600 [2024-11-04 07:11:47.395546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.600 
[2024-11-04 07:11:47.395565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.859 [2024-11-04 07:11:47.447703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.859 [2024-11-04 07:11:47.447769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.391 07:11:50 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.391 spdk_app_start Round 1 00:05:48.391 07:11:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.391 07:11:50 -- event/event.sh@25 -- # waitforlisten 68855 /var/tmp/spdk-nbd.sock 00:05:48.391 07:11:50 -- common/autotest_common.sh@819 -- # '[' -z 68855 ']' 00:05:48.391 07:11:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.391 07:11:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.391 07:11:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.391 07:11:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.391 07:11:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 07:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.649 07:11:50 -- common/autotest_common.sh@852 -- # return 0 00:05:48.649 07:11:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.908 Malloc0 00:05:48.908 07:11:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.475 Malloc1 00:05:49.475 07:11:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.475 /dev/nbd0 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.475 07:11:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:49.475 07:11:51 -- common/autotest_common.sh@857 -- # local i 00:05:49.475 07:11:51 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:05:49.475 07:11:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:49.475 07:11:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:49.475 07:11:51 -- common/autotest_common.sh@861 -- # break 00:05:49.475 07:11:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:49.475 07:11:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:49.475 07:11:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.475 1+0 records in 00:05:49.475 1+0 records out 00:05:49.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228422 s, 17.9 MB/s 00:05:49.475 07:11:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.475 07:11:51 -- common/autotest_common.sh@874 -- # size=4096 00:05:49.475 07:11:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.475 07:11:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:49.475 07:11:51 -- common/autotest_common.sh@877 -- # return 0 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.475 07:11:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.735 /dev/nbd1 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.735 07:11:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:49.735 07:11:51 -- common/autotest_common.sh@857 -- # local i 00:05:49.735 07:11:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:49.735 07:11:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:49.735 07:11:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:49.735 07:11:51 -- common/autotest_common.sh@861 -- # break 00:05:49.735 07:11:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:49.735 07:11:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:49.735 07:11:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.735 1+0 records in 00:05:49.735 1+0 records out 00:05:49.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334285 s, 12.3 MB/s 00:05:49.735 07:11:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.735 07:11:51 -- common/autotest_common.sh@874 -- # size=4096 00:05:49.735 07:11:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.735 07:11:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:49.735 07:11:51 -- common/autotest_common.sh@877 -- # return 0 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.735 07:11:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.303 07:11:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.303 { 00:05:50.303 "bdev_name": "Malloc0", 00:05:50.303 "nbd_device": "/dev/nbd0" 00:05:50.303 }, 00:05:50.303 { 
00:05:50.303 "bdev_name": "Malloc1", 00:05:50.303 "nbd_device": "/dev/nbd1" 00:05:50.304 } 00:05:50.304 ]' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.304 { 00:05:50.304 "bdev_name": "Malloc0", 00:05:50.304 "nbd_device": "/dev/nbd0" 00:05:50.304 }, 00:05:50.304 { 00:05:50.304 "bdev_name": "Malloc1", 00:05:50.304 "nbd_device": "/dev/nbd1" 00:05:50.304 } 00:05:50.304 ]' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.304 /dev/nbd1' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.304 /dev/nbd1' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.304 256+0 records in 00:05:50.304 256+0 records out 00:05:50.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601145 s, 174 MB/s 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.304 256+0 records in 00:05:50.304 256+0 records out 00:05:50.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242943 s, 43.2 MB/s 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.304 256+0 records in 00:05:50.304 256+0 records out 00:05:50.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276755 s, 37.9 MB/s 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.304 
07:11:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@51 -- # local i 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.304 07:11:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@41 -- # break 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.563 07:11:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@41 -- # break 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.822 07:11:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@65 -- # true 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.092 07:11:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.092 07:11:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.363 07:11:53 -- event/event.sh@35 -- # sleep 3 00:05:51.363 [2024-11-04 07:11:53.153058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.363 [2024-11-04 07:11:53.195642] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:05:51.363 [2024-11-04 07:11:53.195655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.621 [2024-11-04 07:11:53.248270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.621 [2024-11-04 07:11:53.248337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.907 07:11:56 -- event/event.sh@23 -- # for i in {0..2} 00:05:54.907 spdk_app_start Round 2 00:05:54.907 07:11:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.907 07:11:56 -- event/event.sh@25 -- # waitforlisten 68855 /var/tmp/spdk-nbd.sock 00:05:54.907 07:11:56 -- common/autotest_common.sh@819 -- # '[' -z 68855 ']' 00:05:54.907 07:11:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.907 07:11:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.907 07:11:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.907 07:11:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.907 07:11:56 -- common/autotest_common.sh@10 -- # set +x 00:05:54.907 07:11:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.907 07:11:56 -- common/autotest_common.sh@852 -- # return 0 00:05:54.907 07:11:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.907 Malloc0 00:05:54.907 07:11:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.907 Malloc1 00:05:55.166 07:11:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.166 07:11:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.426 /dev/nbd0 00:05:55.426 07:11:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.426 07:11:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.426 07:11:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:55.426 07:11:57 -- common/autotest_common.sh@857 -- # local i 00:05:55.426 07:11:57 
-- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:55.426 07:11:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:55.426 07:11:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:55.426 07:11:57 -- common/autotest_common.sh@861 -- # break 00:05:55.426 07:11:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:55.426 07:11:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:55.426 07:11:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.426 1+0 records in 00:05:55.426 1+0 records out 00:05:55.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363881 s, 11.3 MB/s 00:05:55.426 07:11:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.426 07:11:57 -- common/autotest_common.sh@874 -- # size=4096 00:05:55.426 07:11:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.426 07:11:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:55.426 07:11:57 -- common/autotest_common.sh@877 -- # return 0 00:05:55.426 07:11:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.426 07:11:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.426 07:11:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.426 /dev/nbd1 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.684 07:11:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:55.684 07:11:57 -- common/autotest_common.sh@857 -- # local i 00:05:55.684 07:11:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:55.684 07:11:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:55.684 07:11:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:55.684 07:11:57 -- common/autotest_common.sh@861 -- # break 00:05:55.684 07:11:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:55.684 07:11:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:55.684 07:11:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.684 1+0 records in 00:05:55.684 1+0 records out 00:05:55.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036943 s, 11.1 MB/s 00:05:55.684 07:11:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.684 07:11:57 -- common/autotest_common.sh@874 -- # size=4096 00:05:55.684 07:11:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.684 07:11:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:55.684 07:11:57 -- common/autotest_common.sh@877 -- # return 0 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.684 07:11:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.944 { 00:05:55.944 "bdev_name": "Malloc0", 00:05:55.944 "nbd_device": "/dev/nbd0" 
00:05:55.944 }, 00:05:55.944 { 00:05:55.944 "bdev_name": "Malloc1", 00:05:55.944 "nbd_device": "/dev/nbd1" 00:05:55.944 } 00:05:55.944 ]' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.944 { 00:05:55.944 "bdev_name": "Malloc0", 00:05:55.944 "nbd_device": "/dev/nbd0" 00:05:55.944 }, 00:05:55.944 { 00:05:55.944 "bdev_name": "Malloc1", 00:05:55.944 "nbd_device": "/dev/nbd1" 00:05:55.944 } 00:05:55.944 ]' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.944 /dev/nbd1' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.944 /dev/nbd1' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.944 256+0 records in 00:05:55.944 256+0 records out 00:05:55.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735826 s, 143 MB/s 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.944 256+0 records in 00:05:55.944 256+0 records out 00:05:55.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251043 s, 41.8 MB/s 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.944 256+0 records in 00:05:55.944 256+0 records out 00:05:55.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308208 s, 34.0 MB/s 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.944 07:11:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.203 07:11:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@41 -- # break 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.203 07:11:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@41 -- # break 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.770 07:11:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@65 -- # true 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.030 07:11:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.030 07:11:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.289 07:11:58 -- event/event.sh@35 -- # sleep 3 00:05:57.289 [2024-11-04 07:11:59.040452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.289 [2024-11-04 07:11:59.082203] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:57.289 [2024-11-04 07:11:59.082217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.548 [2024-11-04 07:11:59.134795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.548 [2024-11-04 07:11:59.134857] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.079 07:12:01 -- event/event.sh@38 -- # waitforlisten 68855 /var/tmp/spdk-nbd.sock 00:06:00.079 07:12:01 -- common/autotest_common.sh@819 -- # '[' -z 68855 ']' 00:06:00.079 07:12:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.079 07:12:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.079 07:12:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.079 07:12:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.079 07:12:01 -- common/autotest_common.sh@10 -- # set +x 00:06:00.337 07:12:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.337 07:12:02 -- common/autotest_common.sh@852 -- # return 0 00:06:00.337 07:12:02 -- event/event.sh@39 -- # killprocess 68855 00:06:00.337 07:12:02 -- common/autotest_common.sh@926 -- # '[' -z 68855 ']' 00:06:00.337 07:12:02 -- common/autotest_common.sh@930 -- # kill -0 68855 00:06:00.337 07:12:02 -- common/autotest_common.sh@931 -- # uname 00:06:00.337 07:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.337 07:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68855 00:06:00.337 07:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.337 killing process with pid 68855 00:06:00.337 07:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.337 07:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68855' 00:06:00.337 07:12:02 -- common/autotest_common.sh@945 -- # kill 68855 00:06:00.337 07:12:02 -- common/autotest_common.sh@950 -- # wait 68855 00:06:00.596 spdk_app_start is called in Round 0. 00:06:00.596 Shutdown signal received, stop current app iteration 00:06:00.596 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:00.596 spdk_app_start is called in Round 1. 00:06:00.596 Shutdown signal received, stop current app iteration 00:06:00.596 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:00.596 spdk_app_start is called in Round 2. 00:06:00.596 Shutdown signal received, stop current app iteration 00:06:00.596 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:06:00.596 spdk_app_start is called in Round 3. 
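Each of the three app_repeat rounds above drives the same data-verification helper: a 1 MiB reference pattern is generated from /dev/urandom, copied onto every exported nbd device with O_DIRECT, and then byte-compared against the source file. A minimal standalone sketch of that flow follows; it assumes a root shell with /dev/nbd0 and /dev/nbd1 already exported, and the temp-file path is illustrative rather than taken from the trace.

    # write-then-verify sketch, modelled on nbd_dd_data_verify
    set -euo pipefail
    tmp_file=$(mktemp)                      # stand-in for .../test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # 1 MiB (256 x 4 KiB) of random reference data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # push the pattern through each nbd device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # read back and byte-compare against the reference
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm -f "$tmp_file"

Using oflag=direct keeps the writes from being satisfied by the page cache, so the later cmp genuinely exercises the nbd-to-Malloc-bdev path rather than cached data.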
00:06:00.596 Shutdown signal received, stop current app iteration 00:06:00.596 07:12:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.596 07:12:02 -- event/event.sh@42 -- # return 0 00:06:00.596 00:06:00.596 real 0m18.834s 00:06:00.596 user 0m42.670s 00:06:00.596 sys 0m2.723s 00:06:00.596 07:12:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.596 ************************************ 00:06:00.596 END TEST app_repeat 00:06:00.596 ************************************ 00:06:00.596 07:12:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 07:12:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.596 07:12:02 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.596 07:12:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.596 07:12:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.596 07:12:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 ************************************ 00:06:00.596 START TEST cpu_locks 00:06:00.596 ************************************ 00:06:00.596 07:12:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.855 * Looking for test storage... 00:06:00.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.855 07:12:02 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.855 07:12:02 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.855 07:12:02 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.855 07:12:02 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.856 07:12:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.856 07:12:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.856 07:12:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 ************************************ 00:06:00.856 START TEST default_locks 00:06:00.856 ************************************ 00:06:00.856 07:12:02 -- common/autotest_common.sh@1104 -- # default_locks 00:06:00.856 07:12:02 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69473 00:06:00.856 07:12:02 -- event/cpu_locks.sh@47 -- # waitforlisten 69473 00:06:00.856 07:12:02 -- common/autotest_common.sh@819 -- # '[' -z 69473 ']' 00:06:00.856 07:12:02 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.856 07:12:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.856 07:12:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.856 07:12:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.856 07:12:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.856 07:12:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 [2024-11-04 07:12:02.545701] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
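The default_locks test starting here follows the usual launch pattern: spdk_tgt is pinned to core 0 with -m 0x1 and the script blocks until the target's RPC socket answers. The loop below is a simplified stand-in for the waitforlisten helper, not its exact implementation; paths assume the SPDK repo root as the working directory.

    # launch the target on core 0 and poll its RPC socket (simplified waitforlisten)
    build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is up and listening on the socket
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done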
00:06:00.856 [2024-11-04 07:12:02.545819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69473 ] 00:06:00.856 [2024-11-04 07:12:02.683436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.115 [2024-11-04 07:12:02.744512] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.115 [2024-11-04 07:12:02.744685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.681 07:12:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.681 07:12:03 -- common/autotest_common.sh@852 -- # return 0 00:06:01.681 07:12:03 -- event/cpu_locks.sh@49 -- # locks_exist 69473 00:06:01.681 07:12:03 -- event/cpu_locks.sh@22 -- # lslocks -p 69473 00:06:01.681 07:12:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.940 07:12:03 -- event/cpu_locks.sh@50 -- # killprocess 69473 00:06:01.940 07:12:03 -- common/autotest_common.sh@926 -- # '[' -z 69473 ']' 00:06:01.940 07:12:03 -- common/autotest_common.sh@930 -- # kill -0 69473 00:06:01.940 07:12:03 -- common/autotest_common.sh@931 -- # uname 00:06:01.940 07:12:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.940 07:12:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69473 00:06:01.940 07:12:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.940 07:12:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.940 07:12:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69473' 00:06:01.940 killing process with pid 69473 00:06:01.940 07:12:03 -- common/autotest_common.sh@945 -- # kill 69473 00:06:01.940 07:12:03 -- common/autotest_common.sh@950 -- # wait 69473 00:06:02.508 07:12:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69473 00:06:02.508 07:12:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:02.508 07:12:04 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69473 00:06:02.508 07:12:04 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:02.508 07:12:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:02.508 07:12:04 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:02.508 07:12:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:02.508 07:12:04 -- common/autotest_common.sh@643 -- # waitforlisten 69473 00:06:02.508 07:12:04 -- common/autotest_common.sh@819 -- # '[' -z 69473 ']' 00:06:02.508 07:12:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.508 07:12:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.508 07:12:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
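The locks_exist check traced above relies on the fact that a target started with a CPU mask takes an advisory file lock for each core it claims (the lock path contains spdk_cpu_lock, by default under /var/tmp), and lslocks can attribute those locks to the owning pid. A compact sketch of that check, with the pid variable illustrative:

    # does the given pid hold SPDK's per-core lock files?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist "$tgt_pid" && echo "core locks held by pid $tgt_pid"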
00:06:02.508 07:12:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.508 07:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.508 ERROR: process (pid: 69473) is no longer running 00:06:02.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69473) - No such process 00:06:02.508 07:12:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.508 07:12:04 -- common/autotest_common.sh@852 -- # return 1 00:06:02.508 07:12:04 -- common/autotest_common.sh@643 -- # es=1 00:06:02.508 07:12:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:02.508 07:12:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:02.508 07:12:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:02.508 07:12:04 -- event/cpu_locks.sh@54 -- # no_locks 00:06:02.508 07:12:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.508 07:12:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.508 07:12:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.508 00:06:02.508 real 0m1.589s 00:06:02.508 user 0m1.640s 00:06:02.508 sys 0m0.482s 00:06:02.508 ************************************ 00:06:02.508 END TEST default_locks 00:06:02.508 07:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.508 07:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.508 ************************************ 00:06:02.508 07:12:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:02.509 07:12:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.509 07:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.509 07:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 ************************************ 00:06:02.509 START TEST default_locks_via_rpc 00:06:02.509 ************************************ 00:06:02.509 07:12:04 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:02.509 07:12:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69532 00:06:02.509 07:12:04 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.509 07:12:04 -- event/cpu_locks.sh@63 -- # waitforlisten 69532 00:06:02.509 07:12:04 -- common/autotest_common.sh@819 -- # '[' -z 69532 ']' 00:06:02.509 07:12:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.509 07:12:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.509 07:12:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.509 07:12:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.509 07:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 [2024-11-04 07:12:04.180520] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
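Earlier in this block, default_locks finishes with a negative check: NOT waitforlisten 69473 must fail because the target was already killed, which is what the "No such process" lines above confirm. A reduced sketch of the NOT idea is shown below; the real helper also inspects the exit status (for example distinguishing codes above 128) before deciding the failure was the expected kind.

    # invert a command's status: succeed only if the command fails
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT kill -0 "$old_pid" && echo "pid $old_pid is really gone"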
00:06:02.509 [2024-11-04 07:12:04.180609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69532 ] 00:06:02.509 [2024-11-04 07:12:04.313554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.768 [2024-11-04 07:12:04.371983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.768 [2024-11-04 07:12:04.372150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.703 07:12:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.703 07:12:05 -- common/autotest_common.sh@852 -- # return 0 00:06:03.703 07:12:05 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:03.703 07:12:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.703 07:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.703 07:12:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.703 07:12:05 -- event/cpu_locks.sh@67 -- # no_locks 00:06:03.703 07:12:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:03.703 07:12:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:03.703 07:12:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:03.703 07:12:05 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.703 07:12:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.703 07:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.703 07:12:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.703 07:12:05 -- event/cpu_locks.sh@71 -- # locks_exist 69532 00:06:03.703 07:12:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.703 07:12:05 -- event/cpu_locks.sh@22 -- # lslocks -p 69532 00:06:03.961 07:12:05 -- event/cpu_locks.sh@73 -- # killprocess 69532 00:06:03.961 07:12:05 -- common/autotest_common.sh@926 -- # '[' -z 69532 ']' 00:06:03.961 07:12:05 -- common/autotest_common.sh@930 -- # kill -0 69532 00:06:03.961 07:12:05 -- common/autotest_common.sh@931 -- # uname 00:06:03.961 07:12:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.961 07:12:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69532 00:06:03.961 07:12:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:03.961 killing process with pid 69532 00:06:03.961 07:12:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:03.961 07:12:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69532' 00:06:03.961 07:12:05 -- common/autotest_common.sh@945 -- # kill 69532 00:06:03.961 07:12:05 -- common/autotest_common.sh@950 -- # wait 69532 00:06:04.220 00:06:04.220 real 0m1.893s 00:06:04.220 user 0m2.109s 00:06:04.220 sys 0m0.543s 00:06:04.220 ************************************ 00:06:04.220 END TEST default_locks_via_rpc 00:06:04.220 ************************************ 00:06:04.220 07:12:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.220 07:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.479 07:12:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:04.479 07:12:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.479 07:12:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.479 07:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.479 
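default_locks_via_rpc, which just finished, toggles the same per-core locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks re-acquires them, and lslocks confirms each state. A condensed sketch, with the socket path as in the trace and the pid variable illustrative:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock    # expect 0 matches while the locks are released

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"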
************************************ 00:06:04.479 START TEST non_locking_app_on_locked_coremask 00:06:04.479 ************************************ 00:06:04.479 07:12:06 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:04.479 07:12:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69601 00:06:04.479 07:12:06 -- event/cpu_locks.sh@81 -- # waitforlisten 69601 /var/tmp/spdk.sock 00:06:04.479 07:12:06 -- common/autotest_common.sh@819 -- # '[' -z 69601 ']' 00:06:04.479 07:12:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.479 07:12:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.479 07:12:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.479 07:12:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.479 07:12:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.479 07:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.479 [2024-11-04 07:12:06.138570] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:04.479 [2024-11-04 07:12:06.138682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69601 ] 00:06:04.479 [2024-11-04 07:12:06.274622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.739 [2024-11-04 07:12:06.338566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.739 [2024-11-04 07:12:06.338777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.306 07:12:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.306 07:12:07 -- common/autotest_common.sh@852 -- # return 0 00:06:05.306 07:12:07 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:05.306 07:12:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69629 00:06:05.306 07:12:07 -- event/cpu_locks.sh@85 -- # waitforlisten 69629 /var/tmp/spdk2.sock 00:06:05.306 07:12:07 -- common/autotest_common.sh@819 -- # '[' -z 69629 ']' 00:06:05.306 07:12:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.306 07:12:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.306 07:12:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.306 07:12:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.306 07:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:05.306 [2024-11-04 07:12:07.133561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:05.306 [2024-11-04 07:12:07.133646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69629 ] 00:06:05.564 [2024-11-04 07:12:07.268930] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
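The non_locking_app_on_locked_coremask test above launches two targets on the same core: the first claims core 0 and its lock file, and the second can share that core only because it is started with --disable-cpumask-locks and its own RPC socket. The two launch commands reduce to the following sketch (binary path relative to the repo root, as in the trace):

    # first instance: claims core 0 and holds its spdk_cpu_lock file
    build/bin/spdk_tgt -m 0x1 &

    # second instance: same core, but opts out of the lock and uses a separate RPC socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The later locking_app_on_unlocked_coremask test exercises the inverse arrangement: the first instance is started with --disable-cpumask-locks, so a second, lock-taking instance can still claim the core.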
00:06:05.564 [2024-11-04 07:12:07.268980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.823 [2024-11-04 07:12:07.407640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.823 [2024-11-04 07:12:07.407802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.389 07:12:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.389 07:12:08 -- common/autotest_common.sh@852 -- # return 0 00:06:06.389 07:12:08 -- event/cpu_locks.sh@87 -- # locks_exist 69601 00:06:06.389 07:12:08 -- event/cpu_locks.sh@22 -- # lslocks -p 69601 00:06:06.389 07:12:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.325 07:12:08 -- event/cpu_locks.sh@89 -- # killprocess 69601 00:06:07.325 07:12:08 -- common/autotest_common.sh@926 -- # '[' -z 69601 ']' 00:06:07.325 07:12:08 -- common/autotest_common.sh@930 -- # kill -0 69601 00:06:07.325 07:12:08 -- common/autotest_common.sh@931 -- # uname 00:06:07.325 07:12:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.325 07:12:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69601 00:06:07.325 killing process with pid 69601 00:06:07.325 07:12:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.325 07:12:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.325 07:12:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69601' 00:06:07.325 07:12:09 -- common/autotest_common.sh@945 -- # kill 69601 00:06:07.325 07:12:09 -- common/autotest_common.sh@950 -- # wait 69601 00:06:07.892 07:12:09 -- event/cpu_locks.sh@90 -- # killprocess 69629 00:06:07.892 07:12:09 -- common/autotest_common.sh@926 -- # '[' -z 69629 ']' 00:06:07.892 07:12:09 -- common/autotest_common.sh@930 -- # kill -0 69629 00:06:07.892 07:12:09 -- common/autotest_common.sh@931 -- # uname 00:06:07.892 07:12:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.892 07:12:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69629 00:06:07.892 07:12:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.892 07:12:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.892 killing process with pid 69629 00:06:07.892 07:12:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69629' 00:06:07.892 07:12:09 -- common/autotest_common.sh@945 -- # kill 69629 00:06:07.892 07:12:09 -- common/autotest_common.sh@950 -- # wait 69629 00:06:08.465 00:06:08.465 real 0m3.987s 00:06:08.465 user 0m4.477s 00:06:08.465 sys 0m1.091s 00:06:08.465 ************************************ 00:06:08.465 END TEST non_locking_app_on_locked_coremask 00:06:08.465 ************************************ 00:06:08.465 07:12:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.465 07:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 07:12:10 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:08.465 07:12:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.465 07:12:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.465 07:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 ************************************ 00:06:08.465 START TEST locking_app_on_unlocked_coremask 00:06:08.465 ************************************ 00:06:08.465 07:12:10 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:08.465 07:12:10 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69708 00:06:08.465 07:12:10 -- event/cpu_locks.sh@99 -- # waitforlisten 69708 /var/tmp/spdk.sock 00:06:08.465 07:12:10 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:08.465 07:12:10 -- common/autotest_common.sh@819 -- # '[' -z 69708 ']' 00:06:08.465 07:12:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.465 07:12:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.465 07:12:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.465 07:12:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.465 07:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 [2024-11-04 07:12:10.162758] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:08.465 [2024-11-04 07:12:10.163005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:06:08.465 [2024-11-04 07:12:10.286795] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.465 [2024-11-04 07:12:10.286838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.724 [2024-11-04 07:12:10.346197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.724 [2024-11-04 07:12:10.346394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.660 07:12:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.660 07:12:11 -- common/autotest_common.sh@852 -- # return 0 00:06:09.660 07:12:11 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69736 00:06:09.660 07:12:11 -- event/cpu_locks.sh@103 -- # waitforlisten 69736 /var/tmp/spdk2.sock 00:06:09.660 07:12:11 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.660 07:12:11 -- common/autotest_common.sh@819 -- # '[' -z 69736 ']' 00:06:09.660 07:12:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.660 07:12:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.660 07:12:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.660 07:12:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.660 07:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.660 [2024-11-04 07:12:11.208558] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:09.660 [2024-11-04 07:12:11.209600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69736 ] 00:06:09.660 [2024-11-04 07:12:11.348274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.660 [2024-11-04 07:12:11.464537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.660 [2024-11-04 07:12:11.464686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.595 07:12:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.595 07:12:12 -- common/autotest_common.sh@852 -- # return 0 00:06:10.595 07:12:12 -- event/cpu_locks.sh@105 -- # locks_exist 69736 00:06:10.595 07:12:12 -- event/cpu_locks.sh@22 -- # lslocks -p 69736 00:06:10.595 07:12:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.162 07:12:12 -- event/cpu_locks.sh@107 -- # killprocess 69708 00:06:11.162 07:12:12 -- common/autotest_common.sh@926 -- # '[' -z 69708 ']' 00:06:11.162 07:12:12 -- common/autotest_common.sh@930 -- # kill -0 69708 00:06:11.162 07:12:12 -- common/autotest_common.sh@931 -- # uname 00:06:11.162 07:12:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.162 07:12:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69708 00:06:11.163 07:12:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.163 07:12:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.163 killing process with pid 69708 00:06:11.163 07:12:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69708' 00:06:11.163 07:12:12 -- common/autotest_common.sh@945 -- # kill 69708 00:06:11.163 07:12:12 -- common/autotest_common.sh@950 -- # wait 69708 00:06:12.106 07:12:13 -- event/cpu_locks.sh@108 -- # killprocess 69736 00:06:12.106 07:12:13 -- common/autotest_common.sh@926 -- # '[' -z 69736 ']' 00:06:12.107 07:12:13 -- common/autotest_common.sh@930 -- # kill -0 69736 00:06:12.107 07:12:13 -- common/autotest_common.sh@931 -- # uname 00:06:12.107 07:12:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.107 07:12:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69736 00:06:12.107 killing process with pid 69736 00:06:12.107 07:12:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.107 07:12:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.107 07:12:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69736' 00:06:12.107 07:12:13 -- common/autotest_common.sh@945 -- # kill 69736 00:06:12.107 07:12:13 -- common/autotest_common.sh@950 -- # wait 69736 00:06:12.365 00:06:12.365 real 0m3.873s 00:06:12.365 user 0m4.366s 00:06:12.365 sys 0m1.093s 00:06:12.365 07:12:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.365 07:12:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.365 ************************************ 00:06:12.365 END TEST locking_app_on_unlocked_coremask 00:06:12.365 ************************************ 00:06:12.365 07:12:14 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:12.365 07:12:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.365 07:12:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.365 07:12:14 -- common/autotest_common.sh@10 -- # set +x 
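Note: the locks_exist helper traced above boils down to a single lslocks check against the target's PID. A minimal sketch of that idiom, using pid 69708 from this run purely as an illustration (the real helper lives in test/event/cpu_locks.sh and its surrounding plumbing is more involved):

    # passes only while the spdk_tgt process still holds a POSIX lock whose
    # path contains "spdk_cpu_lock" (one lock file per claimed CPU core)
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 69708 && echo "core lock still held by 69708"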
00:06:12.365 ************************************ 00:06:12.365 START TEST locking_app_on_locked_coremask 00:06:12.365 ************************************ 00:06:12.365 07:12:14 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:12.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.365 07:12:14 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69815 00:06:12.365 07:12:14 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.365 07:12:14 -- event/cpu_locks.sh@116 -- # waitforlisten 69815 /var/tmp/spdk.sock 00:06:12.365 07:12:14 -- common/autotest_common.sh@819 -- # '[' -z 69815 ']' 00:06:12.365 07:12:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.365 07:12:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.365 07:12:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.365 07:12:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.365 07:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:12.365 [2024-11-04 07:12:14.105765] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:12.365 [2024-11-04 07:12:14.105869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69815 ] 00:06:12.623 [2024-11-04 07:12:14.242899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.623 [2024-11-04 07:12:14.304786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.623 [2024-11-04 07:12:14.305023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.559 07:12:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.559 07:12:15 -- common/autotest_common.sh@852 -- # return 0 00:06:13.559 07:12:15 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69843 00:06:13.559 07:12:15 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.559 07:12:15 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69843 /var/tmp/spdk2.sock 00:06:13.559 07:12:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:13.559 07:12:15 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69843 /var/tmp/spdk2.sock 00:06:13.559 07:12:15 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:13.559 07:12:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.559 07:12:15 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:13.559 07:12:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.559 07:12:15 -- common/autotest_common.sh@643 -- # waitforlisten 69843 /var/tmp/spdk2.sock 00:06:13.559 07:12:15 -- common/autotest_common.sh@819 -- # '[' -z 69843 ']' 00:06:13.559 07:12:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.559 07:12:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.559 07:12:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
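Note: the valid_exec_arg/NOT trace above wraps waitforlisten 69843 in an expected-failure check: the second target is launched against /var/tmp/spdk2.sock on the same 0x1 mask while pid 69815 still holds the core 0 lock, so the wait is supposed to fail, as the claim error further below confirms. A simplified sketch of that wrapper, based only on the behaviour visible in the trace (the real NOT in test/common/autotest_common.sh also inspects the exit-status ranges seen later in this log):

    # run the wrapped command and invert the result: the caller succeeds
    # only if the command itself exited non-zero
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }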
00:06:13.559 07:12:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.559 07:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.559 [2024-11-04 07:12:15.135302] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:13.559 [2024-11-04 07:12:15.135542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69843 ] 00:06:13.559 [2024-11-04 07:12:15.268519] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69815 has claimed it. 00:06:13.559 [2024-11-04 07:12:15.268586] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:14.126 ERROR: process (pid: 69843) is no longer running 00:06:14.126 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69843) - No such process 00:06:14.126 07:12:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.126 07:12:15 -- common/autotest_common.sh@852 -- # return 1 00:06:14.126 07:12:15 -- common/autotest_common.sh@643 -- # es=1 00:06:14.126 07:12:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:14.126 07:12:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:14.126 07:12:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:14.126 07:12:15 -- event/cpu_locks.sh@122 -- # locks_exist 69815 00:06:14.126 07:12:15 -- event/cpu_locks.sh@22 -- # lslocks -p 69815 00:06:14.126 07:12:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.384 07:12:16 -- event/cpu_locks.sh@124 -- # killprocess 69815 00:06:14.385 07:12:16 -- common/autotest_common.sh@926 -- # '[' -z 69815 ']' 00:06:14.385 07:12:16 -- common/autotest_common.sh@930 -- # kill -0 69815 00:06:14.385 07:12:16 -- common/autotest_common.sh@931 -- # uname 00:06:14.385 07:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.643 07:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69815 00:06:14.643 killing process with pid 69815 00:06:14.643 07:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.643 07:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.643 07:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69815' 00:06:14.643 07:12:16 -- common/autotest_common.sh@945 -- # kill 69815 00:06:14.643 07:12:16 -- common/autotest_common.sh@950 -- # wait 69815 00:06:14.902 00:06:14.902 real 0m2.546s 00:06:14.902 user 0m2.965s 00:06:14.902 sys 0m0.617s 00:06:14.902 07:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.902 07:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.902 ************************************ 00:06:14.902 END TEST locking_app_on_locked_coremask 00:06:14.902 ************************************ 00:06:14.902 07:12:16 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:14.902 07:12:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.902 07:12:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.902 07:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.902 ************************************ 00:06:14.902 START TEST locking_overlapped_coremask 00:06:14.902 ************************************ 00:06:14.902 07:12:16 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:14.902 07:12:16 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69888 00:06:14.902 07:12:16 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:14.902 07:12:16 -- event/cpu_locks.sh@133 -- # waitforlisten 69888 /var/tmp/spdk.sock 00:06:14.902 07:12:16 -- common/autotest_common.sh@819 -- # '[' -z 69888 ']' 00:06:14.902 07:12:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.902 07:12:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.902 07:12:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.902 07:12:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.902 07:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.902 [2024-11-04 07:12:16.693586] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:14.902 [2024-11-04 07:12:16.693660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ] 00:06:15.161 [2024-11-04 07:12:16.822911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.161 [2024-11-04 07:12:16.889845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.161 [2024-11-04 07:12:16.890077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.161 [2024-11-04 07:12:16.890374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.161 [2024-11-04 07:12:16.890387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.097 07:12:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.097 07:12:17 -- common/autotest_common.sh@852 -- # return 0 00:06:16.097 07:12:17 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.097 07:12:17 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69918 00:06:16.097 07:12:17 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69918 /var/tmp/spdk2.sock 00:06:16.097 07:12:17 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.097 07:12:17 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69918 /var/tmp/spdk2.sock 00:06:16.097 07:12:17 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:16.097 07:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.097 07:12:17 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:16.097 07:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.097 07:12:17 -- common/autotest_common.sh@643 -- # waitforlisten 69918 /var/tmp/spdk2.sock 00:06:16.097 07:12:17 -- common/autotest_common.sh@819 -- # '[' -z 69918 ']' 00:06:16.097 07:12:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.097 07:12:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.097 07:12:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
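Note on the core masks in this test: the first target came up with -m 0x7 and started reactors on cores 0, 1 and 2, while the second instance below asks for -m 0x1c. 0x7 is binary 00111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so the masks overlap on core 2, which is exactly the core the claim failure just below complains about. A one-liner to confirm the overlap (illustrative, not part of the test):

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2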
00:06:16.097 07:12:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.097 07:12:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.097 [2024-11-04 07:12:17.660618] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:16.097 [2024-11-04 07:12:17.660851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:06:16.097 [2024-11-04 07:12:17.793961] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69888 has claimed it. 00:06:16.097 [2024-11-04 07:12:17.794010] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.664 ERROR: process (pid: 69918) is no longer running 00:06:16.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69918) - No such process 00:06:16.664 07:12:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.664 07:12:18 -- common/autotest_common.sh@852 -- # return 1 00:06:16.664 07:12:18 -- common/autotest_common.sh@643 -- # es=1 00:06:16.664 07:12:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:16.664 07:12:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:16.664 07:12:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:16.664 07:12:18 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:16.664 07:12:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.664 07:12:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.664 07:12:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.664 07:12:18 -- event/cpu_locks.sh@141 -- # killprocess 69888 00:06:16.664 07:12:18 -- common/autotest_common.sh@926 -- # '[' -z 69888 ']' 00:06:16.664 07:12:18 -- common/autotest_common.sh@930 -- # kill -0 69888 00:06:16.664 07:12:18 -- common/autotest_common.sh@931 -- # uname 00:06:16.664 07:12:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.664 07:12:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69888 00:06:16.664 07:12:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.664 07:12:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.664 07:12:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69888' 00:06:16.664 killing process with pid 69888 00:06:16.664 07:12:18 -- common/autotest_common.sh@945 -- # kill 69888 00:06:16.664 07:12:18 -- common/autotest_common.sh@950 -- # wait 69888 00:06:17.232 00:06:17.232 real 0m2.287s 00:06:17.232 user 0m6.463s 00:06:17.232 sys 0m0.411s 00:06:17.232 07:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.232 07:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 ************************************ 00:06:17.232 END TEST locking_overlapped_coremask 00:06:17.232 ************************************ 00:06:17.232 07:12:18 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.232 07:12:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.232 07:12:18 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.232 07:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 ************************************ 00:06:17.232 START TEST locking_overlapped_coremask_via_rpc 00:06:17.232 ************************************ 00:06:17.232 07:12:18 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:17.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.232 07:12:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69975 00:06:17.232 07:12:18 -- event/cpu_locks.sh@149 -- # waitforlisten 69975 /var/tmp/spdk.sock 00:06:17.232 07:12:18 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.232 07:12:18 -- common/autotest_common.sh@819 -- # '[' -z 69975 ']' 00:06:17.232 07:12:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.232 07:12:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.232 07:12:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.232 07:12:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.232 07:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 [2024-11-04 07:12:19.035143] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:17.232 [2024-11-04 07:12:19.035244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69975 ] 00:06:17.490 [2024-11-04 07:12:19.172610] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:17.490 [2024-11-04 07:12:19.172667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.490 [2024-11-04 07:12:19.236513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.490 [2024-11-04 07:12:19.237113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.490 [2024-11-04 07:12:19.237300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.490 [2024-11-04 07:12:19.237307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.423 07:12:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.423 07:12:20 -- common/autotest_common.sh@852 -- # return 0 00:06:18.423 07:12:20 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:18.423 07:12:20 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70005 00:06:18.423 07:12:20 -- event/cpu_locks.sh@153 -- # waitforlisten 70005 /var/tmp/spdk2.sock 00:06:18.423 07:12:20 -- common/autotest_common.sh@819 -- # '[' -z 70005 ']' 00:06:18.423 07:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.423 07:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.423 07:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:18.423 07:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.423 07:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:18.423 [2024-11-04 07:12:20.083681] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:18.423 [2024-11-04 07:12:20.084118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70005 ] 00:06:18.423 [2024-11-04 07:12:20.218321] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:18.423 [2024-11-04 07:12:20.218379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.681 [2024-11-04 07:12:20.360995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.681 [2024-11-04 07:12:20.362134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.681 [2024-11-04 07:12:20.362387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.681 [2024-11-04 07:12:20.362391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.248 07:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.248 07:12:21 -- common/autotest_common.sh@852 -- # return 0 00:06:19.248 07:12:21 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.248 07:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.248 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.507 07:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.507 07:12:21 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.507 07:12:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.507 07:12:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.507 07:12:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:19.507 07:12:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.507 07:12:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:19.507 07:12:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.507 07:12:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.507 07:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.507 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.507 [2024-11-04 07:12:21.107013] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69975 has claimed it. 
00:06:19.507 2024/11/04 07:12:21 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:19.507 request: 00:06:19.507 { 00:06:19.507 "method": "framework_enable_cpumask_locks", 00:06:19.507 "params": {} 00:06:19.507 } 00:06:19.507 Got JSON-RPC error response 00:06:19.507 GoRPCClient: error on JSON-RPC call 00:06:19.507 07:12:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:19.507 07:12:21 -- common/autotest_common.sh@643 -- # es=1 00:06:19.507 07:12:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.507 07:12:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.507 07:12:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.507 07:12:21 -- event/cpu_locks.sh@158 -- # waitforlisten 69975 /var/tmp/spdk.sock 00:06:19.507 07:12:21 -- common/autotest_common.sh@819 -- # '[' -z 69975 ']' 00:06:19.507 07:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.507 07:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.507 07:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.507 07:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.507 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.766 07:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.766 07:12:21 -- common/autotest_common.sh@852 -- # return 0 00:06:19.766 07:12:21 -- event/cpu_locks.sh@159 -- # waitforlisten 70005 /var/tmp/spdk2.sock 00:06:19.766 07:12:21 -- common/autotest_common.sh@819 -- # '[' -z 70005 ']' 00:06:19.766 07:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.766 07:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.766 07:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
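Note: the request/response pair above shows framework_enable_cpumask_locks failing with code -32603 ("Failed to claim CPU core: 2") when issued to the second target on /var/tmp/spdk2.sock: the first target (pid 69975) had just enabled its own locks through the same RPC, so core 2, shared by the 0x7 and 0x1c masks, was already claimed. For reference, one way to replay the same call by hand against that socket; the JSON-RPC 2.0 framing ("jsonrpc", "id") and the use of nc -U are assumptions of this sketch, not taken from the log:

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks","params":{}}' \
        | nc -U /var/tmp/spdk2.sock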
00:06:19.766 07:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.766 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.025 07:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.025 ************************************ 00:06:20.025 END TEST locking_overlapped_coremask_via_rpc 00:06:20.025 ************************************ 00:06:20.025 07:12:21 -- common/autotest_common.sh@852 -- # return 0 00:06:20.025 07:12:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.025 07:12:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.025 07:12:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.025 07:12:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.025 00:06:20.025 real 0m2.636s 00:06:20.025 user 0m1.352s 00:06:20.025 sys 0m0.205s 00:06:20.025 07:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.025 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.025 07:12:21 -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.025 07:12:21 -- event/cpu_locks.sh@15 -- # [[ -z 69975 ]] 00:06:20.025 07:12:21 -- event/cpu_locks.sh@15 -- # killprocess 69975 00:06:20.025 07:12:21 -- common/autotest_common.sh@926 -- # '[' -z 69975 ']' 00:06:20.025 07:12:21 -- common/autotest_common.sh@930 -- # kill -0 69975 00:06:20.025 07:12:21 -- common/autotest_common.sh@931 -- # uname 00:06:20.025 07:12:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.025 07:12:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69975 00:06:20.025 killing process with pid 69975 00:06:20.025 07:12:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.025 07:12:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.025 07:12:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69975' 00:06:20.025 07:12:21 -- common/autotest_common.sh@945 -- # kill 69975 00:06:20.025 07:12:21 -- common/autotest_common.sh@950 -- # wait 69975 00:06:20.592 07:12:22 -- event/cpu_locks.sh@16 -- # [[ -z 70005 ]] 00:06:20.592 07:12:22 -- event/cpu_locks.sh@16 -- # killprocess 70005 00:06:20.592 07:12:22 -- common/autotest_common.sh@926 -- # '[' -z 70005 ']' 00:06:20.592 07:12:22 -- common/autotest_common.sh@930 -- # kill -0 70005 00:06:20.592 07:12:22 -- common/autotest_common.sh@931 -- # uname 00:06:20.592 07:12:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.592 07:12:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70005 00:06:20.592 killing process with pid 70005 00:06:20.592 07:12:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:20.592 07:12:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:20.592 07:12:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70005' 00:06:20.592 07:12:22 -- common/autotest_common.sh@945 -- # kill 70005 00:06:20.592 07:12:22 -- common/autotest_common.sh@950 -- # wait 70005 00:06:20.850 07:12:22 -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.850 07:12:22 -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.850 07:12:22 -- event/cpu_locks.sh@15 -- # [[ -z 69975 ]] 00:06:20.850 07:12:22 -- event/cpu_locks.sh@15 -- # killprocess 69975 00:06:20.850 07:12:22 -- 
common/autotest_common.sh@926 -- # '[' -z 69975 ']' 00:06:20.850 07:12:22 -- common/autotest_common.sh@930 -- # kill -0 69975 00:06:20.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69975) - No such process 00:06:20.850 Process with pid 69975 is not found 00:06:20.850 07:12:22 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69975 is not found' 00:06:20.850 07:12:22 -- event/cpu_locks.sh@16 -- # [[ -z 70005 ]] 00:06:20.850 Process with pid 70005 is not found 00:06:20.850 07:12:22 -- event/cpu_locks.sh@16 -- # killprocess 70005 00:06:20.850 07:12:22 -- common/autotest_common.sh@926 -- # '[' -z 70005 ']' 00:06:20.850 07:12:22 -- common/autotest_common.sh@930 -- # kill -0 70005 00:06:20.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (70005) - No such process 00:06:20.850 07:12:22 -- common/autotest_common.sh@953 -- # echo 'Process with pid 70005 is not found' 00:06:20.850 07:12:22 -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.109 ************************************ 00:06:21.109 END TEST cpu_locks 00:06:21.109 ************************************ 00:06:21.109 00:06:21.109 real 0m20.298s 00:06:21.109 user 0m36.700s 00:06:21.109 sys 0m5.426s 00:06:21.109 07:12:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.109 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.109 ************************************ 00:06:21.109 END TEST event 00:06:21.109 ************************************ 00:06:21.109 00:06:21.109 real 0m47.390s 00:06:21.109 user 1m31.663s 00:06:21.109 sys 0m8.971s 00:06:21.109 07:12:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.109 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.109 07:12:22 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.109 07:12:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.109 07:12:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.109 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.109 ************************************ 00:06:21.109 START TEST thread 00:06:21.109 ************************************ 00:06:21.109 07:12:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.109 * Looking for test storage... 00:06:21.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:21.109 07:12:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.109 07:12:22 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:21.109 07:12:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.109 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.109 ************************************ 00:06:21.109 START TEST thread_poller_perf 00:06:21.109 ************************************ 00:06:21.109 07:12:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.109 [2024-11-04 07:12:22.885236] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:21.109 [2024-11-04 07:12:22.885314] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:06:21.367 [2024-11-04 07:12:23.016355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.367 [2024-11-04 07:12:23.087087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.367 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.741 [2024-11-04T07:12:24.582Z] ====================================== 00:06:22.741 [2024-11-04T07:12:24.582Z] busy:2207416080 (cyc) 00:06:22.741 [2024-11-04T07:12:24.582Z] total_run_count: 389000 00:06:22.741 [2024-11-04T07:12:24.582Z] tsc_hz: 2200000000 (cyc) 00:06:22.741 [2024-11-04T07:12:24.582Z] ====================================== 00:06:22.741 [2024-11-04T07:12:24.582Z] poller_cost: 5674 (cyc), 2579 (nsec) 00:06:22.741 00:06:22.741 real 0m1.326s 00:06:22.741 user 0m1.153s 00:06:22.741 sys 0m0.063s 00:06:22.741 07:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.741 ************************************ 00:06:22.741 END TEST thread_poller_perf 00:06:22.741 ************************************ 00:06:22.741 07:12:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.741 07:12:24 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.741 07:12:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:22.741 07:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.741 07:12:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.741 ************************************ 00:06:22.741 START TEST thread_poller_perf 00:06:22.741 ************************************ 00:06:22.741 07:12:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.741 [2024-11-04 07:12:24.270236] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:22.741 [2024-11-04 07:12:24.270375] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70186 ] 00:06:22.741 [2024-11-04 07:12:24.407893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.741 [2024-11-04 07:12:24.476926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.741 Running 1000 pollers for 1 seconds with 0 microseconds period. 
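Note on the poller_perf numbers above: the reported poller_cost is consistent with busy cycles divided by the run count, converted to nanoseconds with the reported TSC rate:

    poller_cost = 2207416080 cyc / 389000 runs ≈ 5674 cyc per poll
                = 5674 cyc / 2.2 cyc per ns    ≈ 2579 ns

which matches the "5674 (cyc), 2579 (nsec)" line; the second run below, with a 0 microseconds period, reports the same arithmetic at 436 cyc / 198 ns.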
00:06:24.147 [2024-11-04T07:12:25.988Z] ====================================== 00:06:24.147 [2024-11-04T07:12:25.988Z] busy:2202744920 (cyc) 00:06:24.147 [2024-11-04T07:12:25.988Z] total_run_count: 5049000 00:06:24.147 [2024-11-04T07:12:25.988Z] tsc_hz: 2200000000 (cyc) 00:06:24.147 [2024-11-04T07:12:25.988Z] ====================================== 00:06:24.147 [2024-11-04T07:12:25.988Z] poller_cost: 436 (cyc), 198 (nsec) 00:06:24.147 ************************************ 00:06:24.147 END TEST thread_poller_perf 00:06:24.147 ************************************ 00:06:24.147 00:06:24.147 real 0m1.313s 00:06:24.147 user 0m1.149s 00:06:24.147 sys 0m0.055s 00:06:24.147 07:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.147 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 07:12:25 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.147 ************************************ 00:06:24.147 END TEST thread 00:06:24.147 ************************************ 00:06:24.147 00:06:24.147 real 0m2.827s 00:06:24.147 user 0m2.363s 00:06:24.147 sys 0m0.241s 00:06:24.147 07:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.147 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 07:12:25 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:24.147 07:12:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.147 07:12:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.147 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 ************************************ 00:06:24.147 START TEST accel 00:06:24.147 ************************************ 00:06:24.147 07:12:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:24.147 * Looking for test storage... 00:06:24.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:24.147 07:12:25 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:24.147 07:12:25 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:24.148 07:12:25 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.148 07:12:25 -- accel/accel.sh@59 -- # spdk_tgt_pid=70254 00:06:24.148 07:12:25 -- accel/accel.sh@60 -- # waitforlisten 70254 00:06:24.148 07:12:25 -- common/autotest_common.sh@819 -- # '[' -z 70254 ']' 00:06:24.148 07:12:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.148 07:12:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.148 07:12:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.148 07:12:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.148 07:12:25 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:24.148 07:12:25 -- accel/accel.sh@58 -- # build_accel_config 00:06:24.148 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.148 07:12:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.148 07:12:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.148 07:12:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.148 07:12:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.148 07:12:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.148 07:12:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.148 07:12:25 -- accel/accel.sh@42 -- # jq -r . 
00:06:24.148 [2024-11-04 07:12:25.809684] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.148 [2024-11-04 07:12:25.809997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70254 ] 00:06:24.148 [2024-11-04 07:12:25.948502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.406 [2024-11-04 07:12:26.022752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.406 [2024-11-04 07:12:26.023247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.341 07:12:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.341 07:12:26 -- common/autotest_common.sh@852 -- # return 0 00:06:25.341 07:12:26 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:25.341 07:12:26 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:25.341 07:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.341 07:12:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.341 07:12:26 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:25.341 07:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 
00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # IFS== 00:06:25.341 07:12:26 -- accel/accel.sh@64 -- # read -r opc module 00:06:25.341 07:12:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:25.341 07:12:26 -- accel/accel.sh@67 -- # killprocess 70254 00:06:25.341 07:12:26 -- common/autotest_common.sh@926 -- # '[' -z 70254 ']' 00:06:25.341 07:12:26 -- common/autotest_common.sh@930 -- # kill -0 70254 00:06:25.341 07:12:26 -- common/autotest_common.sh@931 -- # uname 00:06:25.341 07:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.341 07:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70254 00:06:25.341 killing process with pid 70254 00:06:25.341 07:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.341 07:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.341 07:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70254' 00:06:25.341 07:12:26 -- common/autotest_common.sh@945 -- # kill 70254 00:06:25.341 07:12:26 -- common/autotest_common.sh@950 -- # wait 70254 00:06:25.600 07:12:27 -- accel/accel.sh@68 -- # trap - ERR 00:06:25.600 07:12:27 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:25.600 07:12:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:25.600 07:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.600 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.600 07:12:27 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:25.600 07:12:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:25.600 07:12:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.600 07:12:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.600 07:12:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.600 07:12:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:25.600 07:12:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.600 07:12:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.600 07:12:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.600 07:12:27 -- accel/accel.sh@42 -- # jq -r . 00:06:25.600 07:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.600 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.600 07:12:27 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:25.600 07:12:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:25.600 07:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.600 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.600 ************************************ 00:06:25.600 START TEST accel_missing_filename 00:06:25.600 ************************************ 00:06:25.600 07:12:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:25.600 07:12:27 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.600 07:12:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:25.600 07:12:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:25.600 07:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.600 07:12:27 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:25.600 07:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.600 07:12:27 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:25.600 07:12:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:25.600 07:12:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.600 07:12:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.600 07:12:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.600 07:12:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.600 07:12:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.600 07:12:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.600 07:12:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.600 07:12:27 -- accel/accel.sh@42 -- # jq -r . 00:06:25.859 [2024-11-04 07:12:27.448437] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:25.859 [2024-11-04 07:12:27.448552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:06:25.859 [2024-11-04 07:12:27.584227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.859 [2024-11-04 07:12:27.639351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.859 [2024-11-04 07:12:27.692543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.117 [2024-11-04 07:12:27.767278] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:26.118 A filename is required. 
00:06:26.118 07:12:27 -- common/autotest_common.sh@643 -- # es=234 00:06:26.118 07:12:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.118 07:12:27 -- common/autotest_common.sh@652 -- # es=106 00:06:26.118 ************************************ 00:06:26.118 END TEST accel_missing_filename 00:06:26.118 ************************************ 00:06:26.118 07:12:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:26.118 07:12:27 -- common/autotest_common.sh@660 -- # es=1 00:06:26.118 07:12:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.118 00:06:26.118 real 0m0.401s 00:06:26.118 user 0m0.233s 00:06:26.118 sys 0m0.116s 00:06:26.118 07:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.118 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.118 07:12:27 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:26.118 07:12:27 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:26.118 07:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.118 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.118 ************************************ 00:06:26.118 START TEST accel_compress_verify 00:06:26.118 ************************************ 00:06:26.118 07:12:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:26.118 07:12:27 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.118 07:12:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:26.118 07:12:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:26.118 07:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.118 07:12:27 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:26.118 07:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.118 07:12:27 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:26.118 07:12:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:26.118 07:12:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.118 07:12:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.118 07:12:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.118 07:12:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.118 07:12:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.118 07:12:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.118 07:12:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.118 07:12:27 -- accel/accel.sh@42 -- # jq -r . 00:06:26.118 [2024-11-04 07:12:27.895696] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:26.118 [2024-11-04 07:12:27.895792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70348 ] 00:06:26.376 [2024-11-04 07:12:28.034425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.377 [2024-11-04 07:12:28.102235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.377 [2024-11-04 07:12:28.161741] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.635 [2024-11-04 07:12:28.235018] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:26.635 00:06:26.635 Compression does not support the verify option, aborting. 00:06:26.636 ************************************ 00:06:26.636 END TEST accel_compress_verify 00:06:26.636 ************************************ 00:06:26.636 07:12:28 -- common/autotest_common.sh@643 -- # es=161 00:06:26.636 07:12:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.636 07:12:28 -- common/autotest_common.sh@652 -- # es=33 00:06:26.636 07:12:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:26.636 07:12:28 -- common/autotest_common.sh@660 -- # es=1 00:06:26.636 07:12:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.636 00:06:26.636 real 0m0.419s 00:06:26.636 user 0m0.254s 00:06:26.636 sys 0m0.112s 00:06:26.636 07:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.636 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.636 07:12:28 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:26.636 07:12:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:26.636 07:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.636 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.636 ************************************ 00:06:26.636 START TEST accel_wrong_workload 00:06:26.636 ************************************ 00:06:26.636 07:12:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:26.636 07:12:28 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.636 07:12:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:26.636 07:12:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.636 07:12:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:26.636 07:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:26.636 07:12:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.636 07:12:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.636 07:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.636 07:12:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.636 07:12:28 -- accel/accel.sh@42 -- # jq -r . 
00:06:26.636 Unsupported workload type: foobar 00:06:26.636 [2024-11-04 07:12:28.364300] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:26.636 accel_perf options: 00:06:26.636 [-h help message] 00:06:26.636 [-q queue depth per core] 00:06:26.636 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.636 [-T number of threads per core 00:06:26.636 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.636 [-t time in seconds] 00:06:26.636 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.636 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.636 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.636 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.636 [-S for crc32c workload, use this seed value (default 0) 00:06:26.636 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.636 [-f for fill workload, use this BYTE value (default 255) 00:06:26.636 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.636 [-y verify result if this switch is on] 00:06:26.636 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.636 Can be used to spread operations across a wider range of memory. 00:06:26.636 07:12:28 -- common/autotest_common.sh@643 -- # es=1 00:06:26.636 07:12:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.636 07:12:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.636 ************************************ 00:06:26.636 END TEST accel_wrong_workload 00:06:26.636 ************************************ 00:06:26.636 07:12:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.636 00:06:26.636 real 0m0.030s 00:06:26.636 user 0m0.015s 00:06:26.636 sys 0m0.015s 00:06:26.636 07:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.636 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.636 07:12:28 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.636 07:12:28 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:26.636 07:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.636 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.636 ************************************ 00:06:26.636 START TEST accel_negative_buffers 00:06:26.636 ************************************ 00:06:26.636 07:12:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.636 07:12:28 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.636 07:12:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:26.636 07:12:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:26.636 07:12:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.636 07:12:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:26.636 07:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:26.636 07:12:28 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:26.636 07:12:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.636 07:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.636 07:12:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.636 07:12:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.636 07:12:28 -- accel/accel.sh@42 -- # jq -r . 00:06:26.636 -x option must be non-negative. 00:06:26.636 [2024-11-04 07:12:28.435802] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:26.636 accel_perf options: 00:06:26.636 [-h help message] 00:06:26.636 [-q queue depth per core] 00:06:26.637 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.637 [-T number of threads per core 00:06:26.637 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.637 [-t time in seconds] 00:06:26.637 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.637 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.637 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.637 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.637 [-S for crc32c workload, use this seed value (default 0) 00:06:26.637 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.637 [-f for fill workload, use this BYTE value (default 255) 00:06:26.637 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.637 [-y verify result if this switch is on] 00:06:26.637 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.637 Can be used to spread operations across a wider range of memory. 
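For reference, the option listing printed above maps directly onto the accel_perf commands traced throughout this log. A minimal sketch of such a run, using the example binary path shown in the traces; the -c /dev/fd/62 JSON config seen in the traced commands is assumed optional and left out here:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 32 -o 4096

Per the usage text, -t sets the run time in seconds, -w selects the workload, -S the CRC-32C seed, -y enables result verification, -q the per-core queue depth, and -o the transfer size in bytes, which is what the "SPDK Configuration" blocks of the later tests report back.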
00:06:26.637 07:12:28 -- common/autotest_common.sh@643 -- # es=1 00:06:26.637 ************************************ 00:06:26.637 END TEST accel_negative_buffers 00:06:26.637 ************************************ 00:06:26.637 07:12:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.637 07:12:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.637 07:12:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.637 00:06:26.637 real 0m0.021s 00:06:26.637 user 0m0.011s 00:06:26.637 sys 0m0.010s 00:06:26.637 07:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.637 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.896 07:12:28 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:26.896 07:12:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:26.896 07:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.896 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.896 ************************************ 00:06:26.896 START TEST accel_crc32c 00:06:26.896 ************************************ 00:06:26.896 07:12:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:26.896 07:12:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.896 07:12:28 -- accel/accel.sh@17 -- # local accel_module 00:06:26.896 07:12:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:26.896 07:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:26.896 07:12:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.896 07:12:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.896 07:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.896 07:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.896 07:12:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.896 07:12:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.896 07:12:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.896 07:12:28 -- accel/accel.sh@42 -- # jq -r . 00:06:26.896 [2024-11-04 07:12:28.513565] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:26.896 [2024-11-04 07:12:28.513652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70412 ] 00:06:26.896 [2024-11-04 07:12:28.650336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.896 [2024-11-04 07:12:28.706302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.272 07:12:29 -- accel/accel.sh@18 -- # out=' 00:06:28.272 SPDK Configuration: 00:06:28.272 Core mask: 0x1 00:06:28.272 00:06:28.272 Accel Perf Configuration: 00:06:28.272 Workload Type: crc32c 00:06:28.272 CRC-32C seed: 32 00:06:28.272 Transfer size: 4096 bytes 00:06:28.272 Vector count 1 00:06:28.272 Module: software 00:06:28.272 Queue depth: 32 00:06:28.272 Allocate depth: 32 00:06:28.272 # threads/core: 1 00:06:28.272 Run time: 1 seconds 00:06:28.272 Verify: Yes 00:06:28.272 00:06:28.272 Running for 1 seconds... 
00:06:28.272 00:06:28.272 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.272 ------------------------------------------------------------------------------------ 00:06:28.272 0,0 567456/s 2216 MiB/s 0 0 00:06:28.272 ==================================================================================== 00:06:28.272 Total 567456/s 2216 MiB/s 0 0' 00:06:28.272 07:12:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.272 07:12:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:28.272 07:12:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.272 07:12:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:28.272 07:12:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.272 07:12:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.272 07:12:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.272 07:12:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.272 07:12:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.272 07:12:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.272 07:12:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.272 07:12:29 -- accel/accel.sh@42 -- # jq -r . 00:06:28.272 [2024-11-04 07:12:29.911632] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:28.272 [2024-11-04 07:12:29.911727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70428 ] 00:06:28.272 [2024-11-04 07:12:30.047796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.272 [2024-11-04 07:12:30.104320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=0x1 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=32 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=software 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=32 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=32 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=1 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val=Yes 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.531 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.531 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.531 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:28.532 07:12:30 -- accel/accel.sh@21 -- # val= 00:06:28.532 07:12:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.532 07:12:30 -- accel/accel.sh@20 -- # IFS=: 00:06:28.532 07:12:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@21 -- # val= 00:06:29.467 07:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.467 07:12:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.467 07:12:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.467 07:12:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:29.467 07:12:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.467 00:06:29.467 real 0m2.812s 00:06:29.467 user 0m2.393s 00:06:29.467 sys 0m0.220s 00:06:29.467 ************************************ 00:06:29.467 END TEST accel_crc32c 00:06:29.467 ************************************ 00:06:29.467 07:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.467 07:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.726 07:12:31 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:29.726 07:12:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:29.726 07:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.726 07:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.726 ************************************ 00:06:29.726 START TEST accel_crc32c_C2 00:06:29.726 ************************************ 00:06:29.726 07:12:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:29.726 07:12:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.726 07:12:31 -- accel/accel.sh@17 -- # local accel_module 00:06:29.726 07:12:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:29.726 07:12:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.726 07:12:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:29.726 07:12:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.726 07:12:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.726 07:12:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.726 07:12:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.726 07:12:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.726 07:12:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.726 07:12:31 -- accel/accel.sh@42 -- # jq -r . 00:06:29.726 [2024-11-04 07:12:31.381388] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:29.726 [2024-11-04 07:12:31.381484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70468 ] 00:06:29.726 [2024-11-04 07:12:31.517364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.985 [2024-11-04 07:12:31.572625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.919 07:12:32 -- accel/accel.sh@18 -- # out=' 00:06:30.919 SPDK Configuration: 00:06:30.919 Core mask: 0x1 00:06:30.919 00:06:30.919 Accel Perf Configuration: 00:06:30.919 Workload Type: crc32c 00:06:30.919 CRC-32C seed: 0 00:06:30.919 Transfer size: 4096 bytes 00:06:30.919 Vector count 2 00:06:30.919 Module: software 00:06:30.919 Queue depth: 32 00:06:30.919 Allocate depth: 32 00:06:30.919 # threads/core: 1 00:06:30.919 Run time: 1 seconds 00:06:30.919 Verify: Yes 00:06:30.919 00:06:30.919 Running for 1 seconds... 
00:06:30.919 00:06:30.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.919 ------------------------------------------------------------------------------------ 00:06:30.920 0,0 433952/s 3390 MiB/s 0 0 00:06:30.920 ==================================================================================== 00:06:30.920 Total 433952/s 1695 MiB/s 0 0' 00:06:30.920 07:12:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.920 07:12:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.920 07:12:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.920 07:12:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.920 07:12:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.920 07:12:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.920 07:12:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.920 07:12:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.920 07:12:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.920 07:12:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.920 07:12:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.920 07:12:32 -- accel/accel.sh@42 -- # jq -r . 00:06:31.178 [2024-11-04 07:12:32.779674] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:31.178 [2024-11-04 07:12:32.779768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70482 ] 00:06:31.178 [2024-11-04 07:12:32.917739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.178 [2024-11-04 07:12:32.969665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=0x1 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=0 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=software 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=32 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=32 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=1 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val=Yes 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.437 07:12:33 -- accel/accel.sh@21 -- # val= 00:06:31.437 07:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.437 07:12:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@21 -- # val= 00:06:32.373 07:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # IFS=: 00:06:32.373 07:12:34 -- accel/accel.sh@20 -- # read -r var val 00:06:32.373 07:12:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.373 07:12:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:32.373 07:12:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.373 00:06:32.373 real 0m2.801s 00:06:32.373 user 0m2.371s 00:06:32.373 sys 0m0.228s 00:06:32.373 07:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.373 07:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:32.373 ************************************ 00:06:32.373 END TEST accel_crc32c_C2 00:06:32.373 ************************************ 00:06:32.373 07:12:34 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:32.373 07:12:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:32.373 07:12:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.373 07:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:32.373 ************************************ 00:06:32.373 START TEST accel_copy 00:06:32.373 ************************************ 00:06:32.373 07:12:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:32.373 07:12:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.373 07:12:34 -- accel/accel.sh@17 -- # local accel_module 00:06:32.373 07:12:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:32.373 07:12:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:32.373 07:12:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.373 07:12:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.373 07:12:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.373 07:12:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.373 07:12:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.373 07:12:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.373 07:12:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.373 07:12:34 -- accel/accel.sh@42 -- # jq -r . 00:06:32.632 [2024-11-04 07:12:34.228046] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:32.632 [2024-11-04 07:12:34.228332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70515 ] 00:06:32.632 [2024-11-04 07:12:34.360558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.632 [2024-11-04 07:12:34.426974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.009 07:12:35 -- accel/accel.sh@18 -- # out=' 00:06:34.010 SPDK Configuration: 00:06:34.010 Core mask: 0x1 00:06:34.010 00:06:34.010 Accel Perf Configuration: 00:06:34.010 Workload Type: copy 00:06:34.010 Transfer size: 4096 bytes 00:06:34.010 Vector count 1 00:06:34.010 Module: software 00:06:34.010 Queue depth: 32 00:06:34.010 Allocate depth: 32 00:06:34.010 # threads/core: 1 00:06:34.010 Run time: 1 seconds 00:06:34.010 Verify: Yes 00:06:34.010 00:06:34.010 Running for 1 seconds... 
00:06:34.010 00:06:34.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.010 ------------------------------------------------------------------------------------ 00:06:34.010 0,0 388256/s 1516 MiB/s 0 0 00:06:34.010 ==================================================================================== 00:06:34.010 Total 388256/s 1516 MiB/s 0 0' 00:06:34.010 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.010 07:12:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:34.010 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.010 07:12:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:34.010 07:12:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.010 07:12:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.010 07:12:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.010 07:12:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.010 07:12:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.010 07:12:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.010 07:12:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.010 07:12:35 -- accel/accel.sh@42 -- # jq -r . 00:06:34.010 [2024-11-04 07:12:35.624960] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:34.010 [2024-11-04 07:12:35.625040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70536 ] 00:06:34.010 [2024-11-04 07:12:35.748114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.010 [2024-11-04 07:12:35.798746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=0x1 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=copy 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- 
accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=software 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=32 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=32 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=1 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val=Yes 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:34.269 07:12:35 -- accel/accel.sh@21 -- # val= 00:06:34.269 07:12:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # IFS=: 00:06:34.269 07:12:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 ************************************ 00:06:35.207 END TEST accel_copy 00:06:35.207 ************************************ 00:06:35.207 07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@21 -- # val= 00:06:35.207 
07:12:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.207 07:12:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.207 07:12:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.207 07:12:36 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:35.207 07:12:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.207 00:06:35.207 real 0m2.773s 00:06:35.207 user 0m2.368s 00:06:35.207 sys 0m0.207s 00:06:35.207 07:12:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.207 07:12:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.207 07:12:37 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.207 07:12:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:35.207 07:12:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.207 07:12:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.207 ************************************ 00:06:35.207 START TEST accel_fill 00:06:35.207 ************************************ 00:06:35.207 07:12:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.207 07:12:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.207 07:12:37 -- accel/accel.sh@17 -- # local accel_module 00:06:35.207 07:12:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.207 07:12:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.207 07:12:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.207 07:12:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.207 07:12:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.207 07:12:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.207 07:12:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.207 07:12:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.207 07:12:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.207 07:12:37 -- accel/accel.sh@42 -- # jq -r . 00:06:35.470 [2024-11-04 07:12:37.062246] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.470 [2024-11-04 07:12:37.062542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ] 00:06:35.470 [2024-11-04 07:12:37.199336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.470 [2024-11-04 07:12:37.257508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.846 07:12:38 -- accel/accel.sh@18 -- # out=' 00:06:36.846 SPDK Configuration: 00:06:36.846 Core mask: 0x1 00:06:36.846 00:06:36.846 Accel Perf Configuration: 00:06:36.846 Workload Type: fill 00:06:36.846 Fill pattern: 0x80 00:06:36.846 Transfer size: 4096 bytes 00:06:36.846 Vector count 1 00:06:36.846 Module: software 00:06:36.846 Queue depth: 64 00:06:36.846 Allocate depth: 64 00:06:36.846 # threads/core: 1 00:06:36.846 Run time: 1 seconds 00:06:36.846 Verify: Yes 00:06:36.846 00:06:36.846 Running for 1 seconds... 
00:06:36.846 00:06:36.846 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.846 ------------------------------------------------------------------------------------ 00:06:36.846 0,0 563840/s 2202 MiB/s 0 0 00:06:36.846 ==================================================================================== 00:06:36.846 Total 563840/s 2202 MiB/s 0 0' 00:06:36.846 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.846 07:12:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.846 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.846 07:12:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.846 07:12:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.846 07:12:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.846 07:12:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.846 07:12:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.846 07:12:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.846 07:12:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.846 07:12:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.846 07:12:38 -- accel/accel.sh@42 -- # jq -r . 00:06:36.846 [2024-11-04 07:12:38.465728] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.846 [2024-11-04 07:12:38.465819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:06:36.846 [2024-11-04 07:12:38.593235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.846 [2024-11-04 07:12:38.644145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val=0x1 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val=fill 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val=0x80 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 
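The Bandwidth column in these result tables follows from the Transfers column and the configured transfer size. For the fill run above, a quick consistency check (assuming the 4096-byte transfer size reported in its SPDK Configuration block):

    echo '563840 * 4096 / (1024 * 1024)' | bc
    2202

which matches the 2202 MiB/s reported for 563840 transfers/s.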
00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.105 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.105 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.105 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val=software 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val=64 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val=64 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val=1 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val=Yes 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.106 07:12:38 -- accel/accel.sh@21 -- # val= 00:06:37.106 07:12:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.106 07:12:38 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 
00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@21 -- # val= 00:06:38.043 07:12:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.043 07:12:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.043 07:12:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.043 07:12:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:38.043 07:12:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.043 ************************************ 00:06:38.043 END TEST accel_fill 00:06:38.043 ************************************ 00:06:38.043 00:06:38.043 real 0m2.794s 00:06:38.043 user 0m2.374s 00:06:38.043 sys 0m0.219s 00:06:38.043 07:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.043 07:12:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 07:12:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:38.043 07:12:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.043 07:12:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.043 07:12:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 ************************************ 00:06:38.043 START TEST accel_copy_crc32c 00:06:38.043 ************************************ 00:06:38.043 07:12:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:38.043 07:12:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.043 07:12:39 -- accel/accel.sh@17 -- # local accel_module 00:06:38.301 07:12:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:38.301 07:12:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:38.301 07:12:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.301 07:12:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.301 07:12:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.301 07:12:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.301 07:12:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.301 07:12:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.301 07:12:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.301 07:12:39 -- accel/accel.sh@42 -- # jq -r . 00:06:38.301 [2024-11-04 07:12:39.906377] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:38.301 [2024-11-04 07:12:39.906472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:06:38.301 [2024-11-04 07:12:40.042976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.301 [2024-11-04 07:12:40.103607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.685 07:12:41 -- accel/accel.sh@18 -- # out=' 00:06:39.685 SPDK Configuration: 00:06:39.685 Core mask: 0x1 00:06:39.685 00:06:39.685 Accel Perf Configuration: 00:06:39.685 Workload Type: copy_crc32c 00:06:39.685 CRC-32C seed: 0 00:06:39.685 Vector size: 4096 bytes 00:06:39.685 Transfer size: 4096 bytes 00:06:39.685 Vector count 1 00:06:39.685 Module: software 00:06:39.685 Queue depth: 32 00:06:39.685 Allocate depth: 32 00:06:39.685 # threads/core: 1 00:06:39.685 Run time: 1 seconds 00:06:39.685 Verify: Yes 00:06:39.685 00:06:39.685 Running for 1 seconds... 
00:06:39.686 00:06:39.686 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.686 ------------------------------------------------------------------------------------ 00:06:39.686 0,0 305952/s 1195 MiB/s 0 0 00:06:39.686 ==================================================================================== 00:06:39.686 Total 305952/s 1195 MiB/s 0 0' 00:06:39.686 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.686 07:12:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:39.686 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.686 07:12:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:39.686 07:12:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.686 07:12:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.686 07:12:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.686 07:12:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.686 07:12:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.686 07:12:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.686 07:12:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.686 07:12:41 -- accel/accel.sh@42 -- # jq -r . 00:06:39.686 [2024-11-04 07:12:41.311919] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:39.686 [2024-11-04 07:12:41.312012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70633 ] 00:06:39.686 [2024-11-04 07:12:41.450113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.686 [2024-11-04 07:12:41.500146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=0x1 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=0 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 
07:12:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=software 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=32 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=32 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=1 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val=Yes 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:39.976 07:12:41 -- accel/accel.sh@21 -- # val= 00:06:39.976 07:12:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # IFS=: 00:06:39.976 07:12:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 
00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@21 -- # val= 00:06:40.924 07:12:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.924 07:12:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.924 07:12:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.924 07:12:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:40.924 07:12:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.924 ************************************ 00:06:40.924 END TEST accel_copy_crc32c 00:06:40.924 ************************************ 00:06:40.924 00:06:40.924 real 0m2.805s 00:06:40.924 user 0m2.381s 00:06:40.924 sys 0m0.224s 00:06:40.924 07:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.924 07:12:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.924 07:12:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.924 07:12:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:40.924 07:12:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.924 07:12:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.924 ************************************ 00:06:40.924 START TEST accel_copy_crc32c_C2 00:06:40.924 ************************************ 00:06:40.924 07:12:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:40.924 07:12:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.924 07:12:42 -- accel/accel.sh@17 -- # local accel_module 00:06:40.924 07:12:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:40.924 07:12:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:40.924 07:12:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.924 07:12:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.924 07:12:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.925 07:12:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.925 07:12:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.925 07:12:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.925 07:12:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.925 07:12:42 -- accel/accel.sh@42 -- # jq -r . 00:06:40.925 [2024-11-04 07:12:42.762392] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:40.925 [2024-11-04 07:12:42.762484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70672 ] 00:06:41.183 [2024-11-04 07:12:42.897676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.183 [2024-11-04 07:12:42.950699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.559 07:12:44 -- accel/accel.sh@18 -- # out=' 00:06:42.559 SPDK Configuration: 00:06:42.559 Core mask: 0x1 00:06:42.559 00:06:42.559 Accel Perf Configuration: 00:06:42.559 Workload Type: copy_crc32c 00:06:42.559 CRC-32C seed: 0 00:06:42.559 Vector size: 4096 bytes 00:06:42.559 Transfer size: 8192 bytes 00:06:42.559 Vector count 2 00:06:42.559 Module: software 00:06:42.559 Queue depth: 32 00:06:42.559 Allocate depth: 32 00:06:42.559 # threads/core: 1 00:06:42.559 Run time: 1 seconds 00:06:42.559 Verify: Yes 00:06:42.559 00:06:42.559 Running for 1 seconds... 00:06:42.559 00:06:42.559 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.559 ------------------------------------------------------------------------------------ 00:06:42.559 0,0 215520/s 1683 MiB/s 0 0 00:06:42.559 ==================================================================================== 00:06:42.559 Total 215520/s 841 MiB/s 0 0' 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.559 07:12:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.559 07:12:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.559 07:12:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.559 07:12:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.559 07:12:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.559 07:12:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.559 07:12:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.559 07:12:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.559 07:12:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.559 07:12:44 -- accel/accel.sh@42 -- # jq -r . 00:06:42.559 [2024-11-04 07:12:44.154569] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
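Note that in the -C 2 (two-vector) runs the per-core row and the Total row report different bandwidths for the same transfer rate: the per-core figure appears to be computed from the full 8192-byte transfer, while the Total figure appears to use the single 4096-byte vector size. For the copy_crc32c -C 2 table above (a consistency check on the logged numbers, not tool documentation):

    215520 * 8192 / (1024 * 1024)   ~= 1683 MiB/s   (0,0 row)
    215520 * 4096 / (1024 * 1024)   ~= 841 MiB/s    (Total row)

The earlier crc32c -C 2 run shows the same pattern (3390 MiB/s vs 1695 MiB/s at 433952 transfers/s).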
00:06:42.559 [2024-11-04 07:12:44.154665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70689 ] 00:06:42.559 [2024-11-04 07:12:44.282748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.559 [2024-11-04 07:12:44.338604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.559 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.559 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.559 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.559 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.559 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.560 07:12:44 -- accel/accel.sh@21 -- # val=0x1 00:06:42.560 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.560 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.560 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.560 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.560 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.560 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=0 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=software 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=32 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=32 
00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=1 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val=Yes 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.818 07:12:44 -- accel/accel.sh@21 -- # val= 00:06:42.818 07:12:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.818 07:12:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.753 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@21 -- # val= 00:06:43.754 07:12:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.754 07:12:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.754 07:12:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.754 07:12:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:43.754 07:12:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.754 00:06:43.754 real 0m2.791s 00:06:43.754 user 0m2.365s 00:06:43.754 sys 0m0.226s 00:06:43.754 07:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.754 ************************************ 00:06:43.754 END TEST accel_copy_crc32c_C2 00:06:43.754 ************************************ 00:06:43.754 07:12:45 -- common/autotest_common.sh@10 -- # set +x 00:06:43.754 07:12:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:43.754 07:12:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:06:43.754 07:12:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.754 07:12:45 -- common/autotest_common.sh@10 -- # set +x 00:06:43.754 ************************************ 00:06:43.754 START TEST accel_dualcast 00:06:43.754 ************************************ 00:06:43.754 07:12:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:43.754 07:12:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.754 07:12:45 -- accel/accel.sh@17 -- # local accel_module 00:06:43.754 07:12:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:43.754 07:12:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:43.754 07:12:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.754 07:12:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.754 07:12:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.754 07:12:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.754 07:12:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.754 07:12:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.754 07:12:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.754 07:12:45 -- accel/accel.sh@42 -- # jq -r . 00:06:44.013 [2024-11-04 07:12:45.608238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:44.013 [2024-11-04 07:12:45.608347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70724 ] 00:06:44.013 [2024-11-04 07:12:45.744102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.013 [2024-11-04 07:12:45.796146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.389 07:12:46 -- accel/accel.sh@18 -- # out=' 00:06:45.389 SPDK Configuration: 00:06:45.389 Core mask: 0x1 00:06:45.389 00:06:45.389 Accel Perf Configuration: 00:06:45.389 Workload Type: dualcast 00:06:45.389 Transfer size: 4096 bytes 00:06:45.389 Vector count 1 00:06:45.389 Module: software 00:06:45.389 Queue depth: 32 00:06:45.389 Allocate depth: 32 00:06:45.389 # threads/core: 1 00:06:45.389 Run time: 1 seconds 00:06:45.389 Verify: Yes 00:06:45.389 00:06:45.389 Running for 1 seconds... 00:06:45.389 00:06:45.389 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.389 ------------------------------------------------------------------------------------ 00:06:45.389 0,0 420160/s 1641 MiB/s 0 0 00:06:45.389 ==================================================================================== 00:06:45.389 Total 420160/s 1641 MiB/s 0 0' 00:06:45.389 07:12:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:45.389 07:12:46 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 07:12:46 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 07:12:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.389 07:12:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.389 07:12:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.389 07:12:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.389 07:12:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.389 07:12:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.389 07:12:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.389 07:12:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.389 07:12:46 -- accel/accel.sh@42 -- # jq -r . 
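The traces above show how the harness launches the dualcast workload: accel.sh@12 runs build/examples/accel_perf with -c /dev/fd/62, feeding it the JSON config assembled by build_accel_config. A minimal sketch of reproducing the same run by hand, assuming accel_perf falls back to its built-in software module when no -c config is supplied:

    # Sketch only; the binary path and workload flags are taken from the trace above, omitting -c is an assumption
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y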
00:06:45.389 [2024-11-04 07:12:47.016356] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:45.389 [2024-11-04 07:12:47.016451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70743 ] 00:06:45.389 [2024-11-04 07:12:47.152416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.389 [2024-11-04 07:12:47.207191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=0x1 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=dualcast 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=software 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=32 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=32 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=1 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 
07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val=Yes 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.649 07:12:47 -- accel/accel.sh@21 -- # val= 00:06:45.649 07:12:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.649 07:12:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.584 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.584 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.584 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.584 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.584 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.584 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.584 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.584 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.585 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.585 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.585 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.585 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.585 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.585 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.585 07:12:48 -- accel/accel.sh@21 -- # val= 00:06:46.585 07:12:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.585 07:12:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.585 07:12:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.585 07:12:48 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:46.585 07:12:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.585 00:06:46.585 real 0m2.808s 00:06:46.585 user 0m2.396s 00:06:46.585 sys 0m0.209s 00:06:46.585 07:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.585 ************************************ 00:06:46.585 END TEST accel_dualcast 00:06:46.585 ************************************ 00:06:46.585 07:12:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.844 07:12:48 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:46.844 07:12:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.844 07:12:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.844 07:12:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.844 ************************************ 00:06:46.844 START TEST accel_compare 00:06:46.844 ************************************ 00:06:46.844 07:12:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:46.844 
07:12:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.844 07:12:48 -- accel/accel.sh@17 -- # local accel_module 00:06:46.844 07:12:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:46.844 07:12:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.844 07:12:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.844 07:12:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.844 07:12:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.844 07:12:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.844 07:12:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.844 07:12:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.844 07:12:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.844 07:12:48 -- accel/accel.sh@42 -- # jq -r . 00:06:46.844 [2024-11-04 07:12:48.471538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:46.844 [2024-11-04 07:12:48.471784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70772 ] 00:06:46.844 [2024-11-04 07:12:48.608899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.844 [2024-11-04 07:12:48.668461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.219 07:12:49 -- accel/accel.sh@18 -- # out=' 00:06:48.219 SPDK Configuration: 00:06:48.219 Core mask: 0x1 00:06:48.219 00:06:48.219 Accel Perf Configuration: 00:06:48.219 Workload Type: compare 00:06:48.219 Transfer size: 4096 bytes 00:06:48.219 Vector count 1 00:06:48.219 Module: software 00:06:48.219 Queue depth: 32 00:06:48.219 Allocate depth: 32 00:06:48.219 # threads/core: 1 00:06:48.219 Run time: 1 seconds 00:06:48.219 Verify: Yes 00:06:48.219 00:06:48.219 Running for 1 seconds... 00:06:48.219 00:06:48.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.219 ------------------------------------------------------------------------------------ 00:06:48.219 0,0 566080/s 2211 MiB/s 0 0 00:06:48.219 ==================================================================================== 00:06:48.219 Total 566080/s 2211 MiB/s 0 0' 00:06:48.219 07:12:49 -- accel/accel.sh@20 -- # IFS=: 00:06:48.219 07:12:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:48.219 07:12:49 -- accel/accel.sh@20 -- # read -r var val 00:06:48.219 07:12:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:48.219 07:12:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.219 07:12:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.219 07:12:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.219 07:12:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.219 07:12:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.219 07:12:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.219 07:12:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.219 07:12:49 -- accel/accel.sh@42 -- # jq -r . 00:06:48.219 [2024-11-04 07:12:49.875670] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
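For the compare run above, both result rows agree (2211 MiB/s), consistent with the transfer size equalling the 4096-byte vector size when the vector count is 1; the same reader's arithmetic as before reproduces it:

    echo $(( 566080 * 4096 / 1048576 ))   # 2211 -> both rows of the compare table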
00:06:48.220 [2024-11-04 07:12:49.875766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70792 ] 00:06:48.220 [2024-11-04 07:12:50.008215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.479 [2024-11-04 07:12:50.083090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=0x1 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=compare 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=software 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=32 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=32 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=1 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val=Yes 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.479 07:12:50 -- accel/accel.sh@21 -- # val= 00:06:48.479 07:12:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.479 07:12:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.855 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.855 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.855 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.855 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.855 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.856 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.856 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.856 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.856 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.856 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.856 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.856 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.856 07:12:51 -- accel/accel.sh@21 -- # val= 00:06:49.856 07:12:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.856 07:12:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.856 07:12:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.856 07:12:51 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:49.856 07:12:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.856 00:06:49.856 real 0m2.829s 00:06:49.856 user 0m2.399s 00:06:49.856 sys 0m0.226s 00:06:49.856 ************************************ 00:06:49.856 END TEST accel_compare 00:06:49.856 ************************************ 00:06:49.856 07:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.856 07:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:49.856 07:12:51 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:49.856 07:12:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:49.856 07:12:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.856 07:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:49.856 ************************************ 00:06:49.856 START TEST accel_xor 00:06:49.856 ************************************ 00:06:49.856 07:12:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:49.856 07:12:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.856 07:12:51 -- accel/accel.sh@17 -- # local accel_module 00:06:49.856 
07:12:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:49.856 07:12:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.856 07:12:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.856 07:12:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.856 07:12:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.856 07:12:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.856 07:12:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.856 07:12:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.856 07:12:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.856 07:12:51 -- accel/accel.sh@42 -- # jq -r . 00:06:49.856 [2024-11-04 07:12:51.357924] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:49.856 [2024-11-04 07:12:51.358024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70827 ] 00:06:49.856 [2024-11-04 07:12:51.491584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.856 [2024-11-04 07:12:51.545666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.232 07:12:52 -- accel/accel.sh@18 -- # out=' 00:06:51.233 SPDK Configuration: 00:06:51.233 Core mask: 0x1 00:06:51.233 00:06:51.233 Accel Perf Configuration: 00:06:51.233 Workload Type: xor 00:06:51.233 Source buffers: 2 00:06:51.233 Transfer size: 4096 bytes 00:06:51.233 Vector count 1 00:06:51.233 Module: software 00:06:51.233 Queue depth: 32 00:06:51.233 Allocate depth: 32 00:06:51.233 # threads/core: 1 00:06:51.233 Run time: 1 seconds 00:06:51.233 Verify: Yes 00:06:51.233 00:06:51.233 Running for 1 seconds... 00:06:51.233 00:06:51.233 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.233 ------------------------------------------------------------------------------------ 00:06:51.233 0,0 290880/s 1136 MiB/s 0 0 00:06:51.233 ==================================================================================== 00:06:51.233 Total 290880/s 1136 MiB/s 0 0' 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:51.233 07:12:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.233 07:12:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.233 07:12:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.233 07:12:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.233 07:12:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.233 07:12:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.233 07:12:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.233 07:12:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.233 07:12:52 -- accel/accel.sh@42 -- # jq -r . 00:06:51.233 [2024-11-04 07:12:52.748541] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
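The xor run above uses two source buffers ("Source buffers: 2"); further down the harness repeats the workload as run_test accel_xor with -x 3, and that configuration block then reports "Source buffers: 3". A sketch of the two variants, with the same assumption as earlier that -c can be omitted for a hand run:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y          # 2 source buffers, as in the run above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3     # 3 source buffers, as in the later accel_xor test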
00:06:51.233 [2024-11-04 07:12:52.748628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70847 ] 00:06:51.233 [2024-11-04 07:12:52.876908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.233 [2024-11-04 07:12:52.926663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=0x1 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=xor 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=2 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=software 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=32 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=32 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=1 00:06:51.233 07:12:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val=Yes 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.233 07:12:52 -- accel/accel.sh@21 -- # val= 00:06:51.233 07:12:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.233 07:12:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@21 -- # val= 00:06:52.610 07:12:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.610 07:12:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.610 07:12:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.610 07:12:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:52.610 07:12:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.610 00:06:52.610 real 0m2.784s 00:06:52.610 user 0m2.367s 00:06:52.610 sys 0m0.217s 00:06:52.610 07:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.610 07:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 ************************************ 00:06:52.610 END TEST accel_xor 00:06:52.610 ************************************ 00:06:52.610 07:12:54 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:52.610 07:12:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:52.610 07:12:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.610 07:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 ************************************ 00:06:52.610 START TEST accel_xor 00:06:52.610 ************************************ 00:06:52.610 
07:12:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:52.610 07:12:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.610 07:12:54 -- accel/accel.sh@17 -- # local accel_module 00:06:52.610 07:12:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.610 07:12:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.610 07:12:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.610 07:12:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.610 07:12:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.610 07:12:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.610 07:12:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.610 07:12:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.610 07:12:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.610 07:12:54 -- accel/accel.sh@42 -- # jq -r . 00:06:52.610 [2024-11-04 07:12:54.200391] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:52.610 [2024-11-04 07:12:54.200636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70876 ] 00:06:52.610 [2024-11-04 07:12:54.338862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.610 [2024-11-04 07:12:54.399746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.987 07:12:55 -- accel/accel.sh@18 -- # out=' 00:06:53.987 SPDK Configuration: 00:06:53.987 Core mask: 0x1 00:06:53.987 00:06:53.987 Accel Perf Configuration: 00:06:53.987 Workload Type: xor 00:06:53.987 Source buffers: 3 00:06:53.987 Transfer size: 4096 bytes 00:06:53.987 Vector count 1 00:06:53.987 Module: software 00:06:53.987 Queue depth: 32 00:06:53.987 Allocate depth: 32 00:06:53.987 # threads/core: 1 00:06:53.987 Run time: 1 seconds 00:06:53.987 Verify: Yes 00:06:53.987 00:06:53.987 Running for 1 seconds... 00:06:53.987 00:06:53.987 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.987 ------------------------------------------------------------------------------------ 00:06:53.987 0,0 275552/s 1076 MiB/s 0 0 00:06:53.987 ==================================================================================== 00:06:53.987 Total 275552/s 1076 MiB/s 0 0' 00:06:53.987 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.987 07:12:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.987 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.987 07:12:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.987 07:12:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.987 07:12:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.987 07:12:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.987 07:12:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.987 07:12:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.987 07:12:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.987 07:12:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.987 07:12:55 -- accel/accel.sh@42 -- # jq -r . 00:06:53.987 [2024-11-04 07:12:55.611036] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:53.987 [2024-11-04 07:12:55.611137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70895 ] 00:06:53.987 [2024-11-04 07:12:55.747774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.988 [2024-11-04 07:12:55.802043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.246 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.246 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.246 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.246 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.246 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.246 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.246 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.246 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.246 07:12:55 -- accel/accel.sh@21 -- # val=0x1 00:06:54.246 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=xor 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=3 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=software 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=32 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=32 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=1 00:06:54.247 07:12:55 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val=Yes 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.247 07:12:55 -- accel/accel.sh@21 -- # val= 00:06:54.247 07:12:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.247 07:12:55 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@21 -- # val= 00:06:55.183 ************************************ 00:06:55.183 END TEST accel_xor 00:06:55.183 ************************************ 00:06:55.183 07:12:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.183 07:12:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.183 07:12:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.183 07:12:56 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:55.183 07:12:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.183 00:06:55.183 real 0m2.813s 00:06:55.183 user 0m2.388s 00:06:55.183 sys 0m0.223s 00:06:55.183 07:12:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.183 07:12:56 -- common/autotest_common.sh@10 -- # set +x 00:06:55.442 07:12:57 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:55.442 07:12:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:55.442 07:12:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.442 07:12:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.442 ************************************ 00:06:55.442 START TEST accel_dif_verify 00:06:55.442 ************************************ 
00:06:55.442 07:12:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:55.442 07:12:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.442 07:12:57 -- accel/accel.sh@17 -- # local accel_module 00:06:55.442 07:12:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:55.442 07:12:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:55.442 07:12:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.442 07:12:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.442 07:12:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.442 07:12:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.442 07:12:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.442 07:12:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.442 07:12:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.442 07:12:57 -- accel/accel.sh@42 -- # jq -r . 00:06:55.442 [2024-11-04 07:12:57.071539] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:55.442 [2024-11-04 07:12:57.071639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70930 ] 00:06:55.442 [2024-11-04 07:12:57.210391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.442 [2024-11-04 07:12:57.268833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.818 07:12:58 -- accel/accel.sh@18 -- # out=' 00:06:56.818 SPDK Configuration: 00:06:56.818 Core mask: 0x1 00:06:56.818 00:06:56.818 Accel Perf Configuration: 00:06:56.818 Workload Type: dif_verify 00:06:56.818 Vector size: 4096 bytes 00:06:56.818 Transfer size: 4096 bytes 00:06:56.818 Block size: 512 bytes 00:06:56.818 Metadata size: 8 bytes 00:06:56.818 Vector count 1 00:06:56.818 Module: software 00:06:56.818 Queue depth: 32 00:06:56.818 Allocate depth: 32 00:06:56.818 # threads/core: 1 00:06:56.818 Run time: 1 seconds 00:06:56.818 Verify: No 00:06:56.818 00:06:56.818 Running for 1 seconds... 00:06:56.818 00:06:56.818 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.818 ------------------------------------------------------------------------------------ 00:06:56.818 0,0 125312/s 497 MiB/s 0 0 00:06:56.818 ==================================================================================== 00:06:56.818 Total 125312/s 489 MiB/s 0 0' 00:06:56.818 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.818 07:12:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:56.818 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.818 07:12:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.818 07:12:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.818 07:12:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.818 07:12:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.818 07:12:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.819 07:12:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.819 07:12:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.819 07:12:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.819 07:12:58 -- accel/accel.sh@42 -- # jq -r . 00:06:56.819 [2024-11-04 07:12:58.478748] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
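In the dif_verify table above, the per-core and Total rows differ slightly (497 vs 489 MiB/s). With the configured 512-byte block and 8-byte metadata, a 4096-byte transfer carries 8 blocks and therefore 64 bytes of DIF metadata; the figures are consistent with the per-core row counting data plus metadata (4160 bytes) and the Total row counting data bytes only. Reader's arithmetic, not harness output:

    echo $(( 125312 * (4096 + 8 * 4096 / 512) / 1048576 ))   # 497 -> per-core row, 4096 data + 64 metadata bytes
    echo $(( 125312 * 4096 / 1048576 ))                      # 489 -> Total row, data bytes only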
00:06:56.819 [2024-11-04 07:12:58.478846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:06:56.819 [2024-11-04 07:12:58.612819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.091 [2024-11-04 07:12:58.672109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=0x1 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=dif_verify 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=software 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 
-- # val=32 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=32 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=1 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val=No 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.091 07:12:58 -- accel/accel.sh@21 -- # val= 00:06:57.091 07:12:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.091 07:12:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 ************************************ 00:06:58.040 END TEST accel_dif_verify 00:06:58.040 ************************************ 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@21 -- # val= 00:06:58.040 07:12:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 07:12:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 07:12:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.040 07:12:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:58.040 07:12:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.040 00:06:58.040 real 0m2.817s 00:06:58.040 user 0m2.389s 00:06:58.040 sys 0m0.227s 00:06:58.040 07:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.040 
07:12:59 -- common/autotest_common.sh@10 -- # set +x 00:06:58.299 07:12:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:58.299 07:12:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:58.299 07:12:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.299 07:12:59 -- common/autotest_common.sh@10 -- # set +x 00:06:58.299 ************************************ 00:06:58.299 START TEST accel_dif_generate 00:06:58.299 ************************************ 00:06:58.299 07:12:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:58.299 07:12:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.299 07:12:59 -- accel/accel.sh@17 -- # local accel_module 00:06:58.299 07:12:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:58.299 07:12:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.299 07:12:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.299 07:12:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.299 07:12:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.299 07:12:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.299 07:12:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.299 07:12:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.299 07:12:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.299 07:12:59 -- accel/accel.sh@42 -- # jq -r . 00:06:58.299 [2024-11-04 07:12:59.940317] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:58.299 [2024-11-04 07:12:59.940405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:06:58.299 [2024-11-04 07:13:00.063711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.299 [2024-11-04 07:13:00.122199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.678 07:13:01 -- accel/accel.sh@18 -- # out=' 00:06:59.678 SPDK Configuration: 00:06:59.678 Core mask: 0x1 00:06:59.678 00:06:59.678 Accel Perf Configuration: 00:06:59.678 Workload Type: dif_generate 00:06:59.678 Vector size: 4096 bytes 00:06:59.678 Transfer size: 4096 bytes 00:06:59.678 Block size: 512 bytes 00:06:59.678 Metadata size: 8 bytes 00:06:59.678 Vector count 1 00:06:59.678 Module: software 00:06:59.678 Queue depth: 32 00:06:59.678 Allocate depth: 32 00:06:59.678 # threads/core: 1 00:06:59.678 Run time: 1 seconds 00:06:59.678 Verify: No 00:06:59.678 00:06:59.678 Running for 1 seconds... 
00:06:59.678 00:06:59.678 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.678 ------------------------------------------------------------------------------------ 00:06:59.678 0,0 152128/s 603 MiB/s 0 0 00:06:59.678 ==================================================================================== 00:06:59.678 Total 152128/s 594 MiB/s 0 0' 00:06:59.678 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.678 07:13:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:59.678 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.678 07:13:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.678 07:13:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.678 07:13:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.678 07:13:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.678 07:13:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.678 07:13:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.678 07:13:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.678 07:13:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.678 07:13:01 -- accel/accel.sh@42 -- # jq -r . 00:06:59.678 [2024-11-04 07:13:01.330695] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:59.678 [2024-11-04 07:13:01.330960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70998 ] 00:06:59.678 [2024-11-04 07:13:01.466370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.938 [2024-11-04 07:13:01.523349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=0x1 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=dif_generate 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 
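The long runs of 'case "$var" in', 'IFS=:' and 'read -r var val' entries around this point are bash xtrace output from accel.sh as it steps through the workload settings (4096-byte vectors, 512-byte blocks with 8 bytes of metadata, queue depth 32, one thread, one second, verify off) before launching the perf binary; the measurement itself is the accel_perf command shown earlier in this test. A minimal sketch for rerunning the dif_generate workload by hand, assuming the checkout layout from the log (the -c /dev/fd/62 argument is a JSON accel config the harness pipes in over a file descriptor, so it is left out here):

    # Sketch only: flags copied from the logged command line; without -c the
    # run is expected to use the software module, matching 'Module: software' above.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate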
00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=software 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=32 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=32 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=1 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val=No 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:06:59.938 07:13:01 -- accel/accel.sh@21 -- # val= 00:06:59.938 07:13:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # IFS=: 00:06:59.938 07:13:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- 
accel/accel.sh@20 -- # IFS=: 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@21 -- # val= 00:07:00.874 07:13:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 ************************************ 00:07:00.874 END TEST accel_dif_generate 00:07:00.874 ************************************ 00:07:00.874 07:13:02 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 07:13:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.874 07:13:02 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:00.874 07:13:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.874 00:07:00.874 real 0m2.786s 00:07:00.874 user 0m2.373s 00:07:00.874 sys 0m0.216s 00:07:00.874 07:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.874 07:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.133 07:13:02 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:01.133 07:13:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:01.133 07:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.133 07:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.133 ************************************ 00:07:01.133 START TEST accel_dif_generate_copy 00:07:01.133 ************************************ 00:07:01.133 07:13:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:01.133 07:13:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.133 07:13:02 -- accel/accel.sh@17 -- # local accel_module 00:07:01.133 07:13:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.133 07:13:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.133 07:13:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.133 07:13:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.133 07:13:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.133 07:13:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.133 07:13:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.133 07:13:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.133 07:13:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.133 07:13:02 -- accel/accel.sh@42 -- # jq -r . 00:07:01.133 [2024-11-04 07:13:02.785800] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:01.133 [2024-11-04 07:13:02.786053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:07:01.133 [2024-11-04 07:13:02.923165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.392 [2024-11-04 07:13:02.977443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.328 07:13:04 -- accel/accel.sh@18 -- # out=' 00:07:02.328 SPDK Configuration: 00:07:02.329 Core mask: 0x1 00:07:02.329 00:07:02.329 Accel Perf Configuration: 00:07:02.329 Workload Type: dif_generate_copy 00:07:02.329 Vector size: 4096 bytes 00:07:02.329 Transfer size: 4096 bytes 00:07:02.329 Vector count 1 00:07:02.329 Module: software 00:07:02.329 Queue depth: 32 00:07:02.329 Allocate depth: 32 00:07:02.329 # threads/core: 1 00:07:02.329 Run time: 1 seconds 00:07:02.329 Verify: No 00:07:02.329 00:07:02.329 Running for 1 seconds... 00:07:02.329 00:07:02.329 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.329 ------------------------------------------------------------------------------------ 00:07:02.329 0,0 117280/s 465 MiB/s 0 0 00:07:02.329 ==================================================================================== 00:07:02.329 Total 117280/s 458 MiB/s 0 0' 00:07:02.329 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.329 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.329 07:13:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:02.329 07:13:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:02.329 07:13:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.329 07:13:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.329 07:13:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.329 07:13:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.329 07:13:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.329 07:13:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.329 07:13:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.329 07:13:04 -- accel/accel.sh@42 -- # jq -r . 00:07:02.588 [2024-11-04 07:13:04.184671] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
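The dif_generate_copy variant is set up the same way apart from the workload name; its configuration block above has no separate block/metadata size fields and the one-second run moved roughly 117k 4096-byte transfers. A hedged reproduction sketch using only flags visible in the logged command line:

    # dif_generate_copy, 1 thread, 1 second (sketch; harness accel config omitted)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy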
00:07:02.588 [2024-11-04 07:13:04.184765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71052 ] 00:07:02.588 [2024-11-04 07:13:04.320562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.588 [2024-11-04 07:13:04.379969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=0x1 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=software 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=32 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=32 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 
-- # val=1 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val=No 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:02.847 07:13:04 -- accel/accel.sh@21 -- # val= 00:07:02.847 07:13:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # IFS=: 00:07:02.847 07:13:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@21 -- # val= 00:07:03.783 07:13:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.783 07:13:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.783 07:13:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.783 07:13:05 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:03.783 07:13:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.783 00:07:03.783 real 0m2.809s 00:07:03.783 user 0m2.382s 00:07:03.783 sys 0m0.222s 00:07:03.783 ************************************ 00:07:03.783 END TEST accel_dif_generate_copy 00:07:03.783 ************************************ 00:07:03.783 07:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.783 07:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:03.783 07:13:05 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:03.783 07:13:05 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.783 07:13:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:03.783 07:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.783 07:13:05 -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.043 ************************************ 00:07:04.043 START TEST accel_comp 00:07:04.043 ************************************ 00:07:04.043 07:13:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.043 07:13:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.043 07:13:05 -- accel/accel.sh@17 -- # local accel_module 00:07:04.043 07:13:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.043 07:13:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.043 07:13:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.043 07:13:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.043 07:13:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.043 07:13:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.043 07:13:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.043 07:13:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.043 07:13:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.043 07:13:05 -- accel/accel.sh@42 -- # jq -r . 00:07:04.043 [2024-11-04 07:13:05.650901] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:04.043 [2024-11-04 07:13:05.650998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:07:04.043 [2024-11-04 07:13:05.786210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.043 [2024-11-04 07:13:05.847903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.422 07:13:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.423 00:07:05.423 SPDK Configuration: 00:07:05.423 Core mask: 0x1 00:07:05.423 00:07:05.423 Accel Perf Configuration: 00:07:05.423 Workload Type: compress 00:07:05.423 Transfer size: 4096 bytes 00:07:05.423 Vector count 1 00:07:05.423 Module: software 00:07:05.423 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.423 Queue depth: 32 00:07:05.423 Allocate depth: 32 00:07:05.423 # threads/core: 1 00:07:05.423 Run time: 1 seconds 00:07:05.423 Verify: No 00:07:05.423 00:07:05.423 Running for 1 seconds... 
00:07:05.423 00:07:05.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.423 ------------------------------------------------------------------------------------ 00:07:05.423 0,0 59424/s 247 MiB/s 0 0 00:07:05.423 ==================================================================================== 00:07:05.423 Total 59424/s 232 MiB/s 0 0' 00:07:05.423 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.423 07:13:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.423 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.423 07:13:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.423 07:13:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.423 07:13:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.423 07:13:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.423 07:13:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.423 07:13:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.423 07:13:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.423 07:13:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.423 07:13:07 -- accel/accel.sh@42 -- # jq -r . 00:07:05.423 [2024-11-04 07:13:07.072517] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:05.423 [2024-11-04 07:13:07.072611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71106 ] 00:07:05.423 [2024-11-04 07:13:07.205160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.423 [2024-11-04 07:13:07.253740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val=0x1 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val=compress 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 
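The compress test is the first in this run to take an input file: the logged command adds '-l /home/vagrant/spdk_repo/spdk/test/accel/bib', which shows up in the configuration block as 'File Name', and throughput drops to roughly 59k ops/s since the 4096-byte buffers are compressed by the software module. A reproduction sketch limited to the flags present in the log:

    # compress workload against the bundled 'bib' test file (sketch; the
    # harness-supplied config on /dev/fd/62 is omitted)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib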
00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val=software 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.681 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 07:13:07 -- accel/accel.sh@21 -- # val=32 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val=32 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val=1 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val=No 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 07:13:07 -- accel/accel.sh@21 -- # val= 00:07:05.682 07:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 07:13:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.617 07:13:08 -- accel/accel.sh@21 -- # val= 00:07:06.617 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.617 07:13:08 -- accel/accel.sh@21 -- # val= 00:07:06.617 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.617 07:13:08 -- accel/accel.sh@21 -- # val= 00:07:06.617 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.617 07:13:08 -- accel/accel.sh@21 -- # val= 
00:07:06.617 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.617 07:13:08 -- accel/accel.sh@21 -- # val= 00:07:06.617 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.617 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.618 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.618 07:13:08 -- accel/accel.sh@21 -- # val= 00:07:06.618 07:13:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.618 07:13:08 -- accel/accel.sh@20 -- # IFS=: 00:07:06.618 07:13:08 -- accel/accel.sh@20 -- # read -r var val 00:07:06.618 07:13:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.618 07:13:08 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:06.618 07:13:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.618 00:07:06.618 real 0m2.813s 00:07:06.618 user 0m2.379s 00:07:06.618 sys 0m0.233s 00:07:06.618 ************************************ 00:07:06.618 END TEST accel_comp 00:07:06.618 ************************************ 00:07:06.618 07:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.618 07:13:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.877 07:13:08 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 07:13:08 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:06.877 07:13:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.877 07:13:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.877 ************************************ 00:07:06.877 START TEST accel_decomp 00:07:06.877 ************************************ 00:07:06.877 07:13:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 07:13:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.877 07:13:08 -- accel/accel.sh@17 -- # local accel_module 00:07:06.877 07:13:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 07:13:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 07:13:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.877 07:13:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.877 07:13:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.877 07:13:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.877 07:13:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.877 07:13:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.877 07:13:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.877 07:13:08 -- accel/accel.sh@42 -- # jq -r . 00:07:06.877 [2024-11-04 07:13:08.519417] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.877 [2024-11-04 07:13:08.519503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:07:06.877 [2024-11-04 07:13:08.647746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.877 [2024-11-04 07:13:08.700078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.253 07:13:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:08.253 00:07:08.253 SPDK Configuration: 00:07:08.253 Core mask: 0x1 00:07:08.253 00:07:08.253 Accel Perf Configuration: 00:07:08.253 Workload Type: decompress 00:07:08.253 Transfer size: 4096 bytes 00:07:08.253 Vector count 1 00:07:08.253 Module: software 00:07:08.253 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.253 Queue depth: 32 00:07:08.253 Allocate depth: 32 00:07:08.253 # threads/core: 1 00:07:08.253 Run time: 1 seconds 00:07:08.253 Verify: Yes 00:07:08.253 00:07:08.253 Running for 1 seconds... 00:07:08.253 00:07:08.253 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.253 ------------------------------------------------------------------------------------ 00:07:08.253 0,0 85120/s 156 MiB/s 0 0 00:07:08.253 ==================================================================================== 00:07:08.253 Total 85120/s 332 MiB/s 0 0' 00:07:08.253 07:13:09 -- accel/accel.sh@20 -- # IFS=: 00:07:08.253 07:13:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.253 07:13:09 -- accel/accel.sh@20 -- # read -r var val 00:07:08.253 07:13:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.253 07:13:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.253 07:13:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.253 07:13:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.253 07:13:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.253 07:13:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.253 07:13:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.253 07:13:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.253 07:13:09 -- accel/accel.sh@42 -- # jq -r . 00:07:08.253 [2024-11-04 07:13:09.911900] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
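accel_decomp mirrors the compress case but adds '-y', which the configuration block reports as 'Verify: Yes', so each decompressed buffer is checked rather than just timed. A sketch with the same hedging as above:

    # decompress + verify against the same input file (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y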
00:07:08.253 [2024-11-04 07:13:09.911994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71149 ] 00:07:08.253 [2024-11-04 07:13:10.049599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.512 [2024-11-04 07:13:10.112295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=0x1 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=decompress 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=software 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=32 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- 
accel/accel.sh@21 -- # val=32 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=1 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val=Yes 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:08.512 07:13:10 -- accel/accel.sh@21 -- # val= 00:07:08.512 07:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # IFS=: 00:07:08.512 07:13:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@21 -- # val= 00:07:09.888 ************************************ 00:07:09.888 END TEST accel_decomp 00:07:09.888 ************************************ 00:07:09.888 07:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # IFS=: 00:07:09.888 07:13:11 -- accel/accel.sh@20 -- # read -r var val 00:07:09.888 07:13:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.888 07:13:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.888 07:13:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.888 00:07:09.888 real 0m2.808s 00:07:09.888 user 0m2.388s 00:07:09.888 sys 0m0.215s 00:07:09.888 07:13:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.888 07:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 07:13:11 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
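The accel_decmop_full variant (the spelling comes from the test suite itself) reuses the decompress workload but adds '-o 0'. Judging from the configuration block that follows, this makes the tool size transfers from the input instead of the default 4 KiB: the next run reports 'Transfer size: 111250 bytes' and correspondingly low ops/s at similar bandwidth. A sketch of the equivalent manual invocation, again restricted to the logged flags:

    # full-buffer decompress: -o 0 taken from the logged run_test line (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0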
00:07:09.888 07:13:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:09.888 07:13:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.888 07:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 ************************************ 00:07:09.888 START TEST accel_decmop_full 00:07:09.888 ************************************ 00:07:09.888 07:13:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.888 07:13:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.888 07:13:11 -- accel/accel.sh@17 -- # local accel_module 00:07:09.888 07:13:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.888 07:13:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.888 07:13:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.888 07:13:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.888 07:13:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.888 07:13:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.888 07:13:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.888 07:13:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.888 07:13:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.888 07:13:11 -- accel/accel.sh@42 -- # jq -r . 00:07:09.888 [2024-11-04 07:13:11.381725] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:09.888 [2024-11-04 07:13:11.381819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:07:09.888 [2024-11-04 07:13:11.508864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.888 [2024-11-04 07:13:11.559617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.266 07:13:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:11.266 00:07:11.266 SPDK Configuration: 00:07:11.266 Core mask: 0x1 00:07:11.266 00:07:11.266 Accel Perf Configuration: 00:07:11.266 Workload Type: decompress 00:07:11.266 Transfer size: 111250 bytes 00:07:11.266 Vector count 1 00:07:11.266 Module: software 00:07:11.266 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.266 Queue depth: 32 00:07:11.266 Allocate depth: 32 00:07:11.266 # threads/core: 1 00:07:11.266 Run time: 1 seconds 00:07:11.266 Verify: Yes 00:07:11.266 00:07:11.266 Running for 1 seconds... 
00:07:11.266 00:07:11.266 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.266 ------------------------------------------------------------------------------------ 00:07:11.266 0,0 5632/s 232 MiB/s 0 0 00:07:11.266 ==================================================================================== 00:07:11.266 Total 5632/s 597 MiB/s 0 0' 00:07:11.266 07:13:12 -- accel/accel.sh@20 -- # IFS=: 00:07:11.266 07:13:12 -- accel/accel.sh@20 -- # read -r var val 00:07:11.266 07:13:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:11.266 07:13:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:11.266 07:13:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.266 07:13:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.266 07:13:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.267 07:13:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.267 07:13:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.267 07:13:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.267 07:13:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.267 07:13:12 -- accel/accel.sh@42 -- # jq -r . 00:07:11.267 [2024-11-04 07:13:12.771195] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:11.267 [2024-11-04 07:13:12.771297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71205 ] 00:07:11.267 [2024-11-04 07:13:12.901725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.267 [2024-11-04 07:13:12.954105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=0x1 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=decompress 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.267 07:13:13 -- accel/accel.sh@20 
-- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=software 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=32 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=32 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=1 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val=Yes 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.267 07:13:13 -- accel/accel.sh@21 -- # val= 00:07:11.267 07:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.267 07:13:13 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # 
val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@21 -- # val= 00:07:12.643 07:13:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.643 07:13:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.643 07:13:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.643 07:13:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.643 07:13:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.643 00:07:12.643 real 0m2.799s 00:07:12.643 user 0m2.382s 00:07:12.643 sys 0m0.215s 00:07:12.643 07:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.643 ************************************ 00:07:12.643 END TEST accel_decmop_full 00:07:12.643 ************************************ 00:07:12.643 07:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:12.643 07:13:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.643 07:13:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:12.643 07:13:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.643 07:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:12.643 ************************************ 00:07:12.643 START TEST accel_decomp_mcore 00:07:12.643 ************************************ 00:07:12.643 07:13:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.643 07:13:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.643 07:13:14 -- accel/accel.sh@17 -- # local accel_module 00:07:12.643 07:13:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.643 07:13:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:12.643 07:13:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.643 07:13:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.643 07:13:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.643 07:13:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.643 07:13:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.643 07:13:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.643 07:13:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.643 07:13:14 -- accel/accel.sh@42 -- # jq -r . 00:07:12.643 [2024-11-04 07:13:14.232257] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.643 [2024-11-04 07:13:14.232347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:07:12.643 [2024-11-04 07:13:14.364342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.643 [2024-11-04 07:13:14.420785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.643 [2024-11-04 07:13:14.420935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.643 [2024-11-04 07:13:14.421040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.643 [2024-11-04 07:13:14.421231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.031 07:13:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:14.031 00:07:14.031 SPDK Configuration: 00:07:14.031 Core mask: 0xf 00:07:14.031 00:07:14.031 Accel Perf Configuration: 00:07:14.032 Workload Type: decompress 00:07:14.032 Transfer size: 4096 bytes 00:07:14.032 Vector count 1 00:07:14.032 Module: software 00:07:14.032 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.032 Queue depth: 32 00:07:14.032 Allocate depth: 32 00:07:14.032 # threads/core: 1 00:07:14.032 Run time: 1 seconds 00:07:14.032 Verify: Yes 00:07:14.032 00:07:14.032 Running for 1 seconds... 00:07:14.032 00:07:14.032 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.032 ------------------------------------------------------------------------------------ 00:07:14.032 0,0 59968/s 110 MiB/s 0 0 00:07:14.032 3,0 54240/s 99 MiB/s 0 0 00:07:14.032 2,0 56448/s 104 MiB/s 0 0 00:07:14.032 1,0 56896/s 104 MiB/s 0 0 00:07:14.032 ==================================================================================== 00:07:14.032 Total 227552/s 888 MiB/s 0 0' 00:07:14.032 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.032 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.032 07:13:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.032 07:13:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.032 07:13:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.032 07:13:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.032 07:13:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.032 07:13:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.032 07:13:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.032 07:13:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.032 07:13:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.032 07:13:15 -- accel/accel.sh@42 -- # jq -r . 00:07:14.032 [2024-11-04 07:13:15.635388] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:14.032 [2024-11-04 07:13:15.635458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:07:14.032 [2024-11-04 07:13:15.764426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.032 [2024-11-04 07:13:15.817681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.032 [2024-11-04 07:13:15.817841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.032 [2024-11-04 07:13:15.818821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.032 [2024-11-04 07:13:15.818784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val=0xf 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val=decompress 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val=software 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.304 07:13:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.304 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.304 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val=32 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val=32 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val=1 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val=Yes 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.305 07:13:15 -- accel/accel.sh@21 -- # val= 00:07:14.305 07:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # IFS=: 00:07:14.305 07:13:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- 
accel/accel.sh@20 -- # read -r var val 00:07:15.240 07:13:17 -- accel/accel.sh@21 -- # val= 00:07:15.240 07:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # IFS=: 00:07:15.240 07:13:17 -- accel/accel.sh@20 -- # read -r var val 00:07:15.240 ************************************ 00:07:15.240 END TEST accel_decomp_mcore 00:07:15.240 ************************************ 00:07:15.240 07:13:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.240 07:13:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.240 07:13:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.240 00:07:15.240 real 0m2.808s 00:07:15.240 user 0m9.154s 00:07:15.240 sys 0m0.238s 00:07:15.240 07:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.240 07:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:15.240 07:13:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.240 07:13:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:15.240 07:13:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.240 07:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:15.240 ************************************ 00:07:15.240 START TEST accel_decomp_full_mcore 00:07:15.240 ************************************ 00:07:15.240 07:13:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.240 07:13:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.240 07:13:17 -- accel/accel.sh@17 -- # local accel_module 00:07:15.240 07:13:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.240 07:13:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.240 07:13:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.240 07:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.240 07:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.240 07:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.240 07:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.498 07:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.498 07:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.498 07:13:17 -- accel/accel.sh@42 -- # jq -r . 00:07:15.498 [2024-11-04 07:13:17.100176] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:15.499 [2024-11-04 07:13:17.100437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:07:15.499 [2024-11-04 07:13:17.235572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.499 [2024-11-04 07:13:17.291169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.499 [2024-11-04 07:13:17.291321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.499 [2024-11-04 07:13:17.291448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.499 [2024-11-04 07:13:17.291689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.873 07:13:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.873 00:07:16.873 SPDK Configuration: 00:07:16.873 Core mask: 0xf 00:07:16.873 00:07:16.873 Accel Perf Configuration: 00:07:16.873 Workload Type: decompress 00:07:16.873 Transfer size: 111250 bytes 00:07:16.874 Vector count 1 00:07:16.874 Module: software 00:07:16.874 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.874 Queue depth: 32 00:07:16.874 Allocate depth: 32 00:07:16.874 # threads/core: 1 00:07:16.874 Run time: 1 seconds 00:07:16.874 Verify: Yes 00:07:16.874 00:07:16.874 Running for 1 seconds... 00:07:16.874 00:07:16.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.874 ------------------------------------------------------------------------------------ 00:07:16.874 0,0 5344/s 220 MiB/s 0 0 00:07:16.874 3,0 5280/s 218 MiB/s 0 0 00:07:16.874 2,0 5600/s 231 MiB/s 0 0 00:07:16.874 1,0 5376/s 222 MiB/s 0 0 00:07:16.874 ==================================================================================== 00:07:16.874 Total 21600/s 2291 MiB/s 0 0' 00:07:16.874 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.874 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.874 07:13:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.874 07:13:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.874 07:13:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.874 07:13:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.874 07:13:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.874 07:13:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.874 07:13:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.874 07:13:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.874 07:13:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.874 07:13:18 -- accel/accel.sh@42 -- # jq -r . 00:07:16.874 [2024-11-04 07:13:18.521778] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:16.874 [2024-11-04 07:13:18.522508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71317 ] 00:07:16.874 [2024-11-04 07:13:18.659836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.132 [2024-11-04 07:13:18.718911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.132 [2024-11-04 07:13:18.719069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.132 [2024-11-04 07:13:18.719188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.132 [2024-11-04 07:13:18.719480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.132 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.132 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.132 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.132 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.132 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.132 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.132 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=0xf 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=decompress 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=software 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 
00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=32 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=32 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=1 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val=Yes 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.133 07:13:18 -- accel/accel.sh@21 -- # val= 00:07:17.133 07:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:17.133 07:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.506 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.506 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.506 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.507 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.507 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.507 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.507 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.507 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.507 07:13:19 -- 
accel/accel.sh@20 -- # read -r var val 00:07:18.507 07:13:19 -- accel/accel.sh@21 -- # val= 00:07:18.507 07:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.507 07:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:18.507 07:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:18.507 07:13:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.507 07:13:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.507 07:13:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.507 00:07:18.507 real 0m2.865s 00:07:18.507 user 0m9.251s 00:07:18.507 sys 0m0.248s 00:07:18.507 07:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.507 ************************************ 00:07:18.507 END TEST accel_decomp_full_mcore 00:07:18.507 ************************************ 00:07:18.507 07:13:19 -- common/autotest_common.sh@10 -- # set +x 00:07:18.507 07:13:19 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.507 07:13:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:18.507 07:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.507 07:13:19 -- common/autotest_common.sh@10 -- # set +x 00:07:18.507 ************************************ 00:07:18.507 START TEST accel_decomp_mthread 00:07:18.507 ************************************ 00:07:18.507 07:13:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.507 07:13:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.507 07:13:19 -- accel/accel.sh@17 -- # local accel_module 00:07:18.507 07:13:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.507 07:13:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:18.507 07:13:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.507 07:13:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.507 07:13:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.507 07:13:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.507 07:13:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.507 07:13:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.507 07:13:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.507 07:13:19 -- accel/accel.sh@42 -- # jq -r . 00:07:18.507 [2024-11-04 07:13:20.014021] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:18.507 [2024-11-04 07:13:20.014267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71354 ] 00:07:18.507 [2024-11-04 07:13:20.143384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.507 [2024-11-04 07:13:20.204786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.883 07:13:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:19.883 00:07:19.883 SPDK Configuration: 00:07:19.883 Core mask: 0x1 00:07:19.883 00:07:19.884 Accel Perf Configuration: 00:07:19.884 Workload Type: decompress 00:07:19.884 Transfer size: 4096 bytes 00:07:19.884 Vector count 1 00:07:19.884 Module: software 00:07:19.884 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.884 Queue depth: 32 00:07:19.884 Allocate depth: 32 00:07:19.884 # threads/core: 2 00:07:19.884 Run time: 1 seconds 00:07:19.884 Verify: Yes 00:07:19.884 00:07:19.884 Running for 1 seconds... 00:07:19.884 00:07:19.884 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.884 ------------------------------------------------------------------------------------ 00:07:19.884 0,1 42784/s 78 MiB/s 0 0 00:07:19.884 0,0 42656/s 78 MiB/s 0 0 00:07:19.884 ==================================================================================== 00:07:19.884 Total 85440/s 333 MiB/s 0 0' 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:19.884 07:13:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.884 07:13:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.884 07:13:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.884 07:13:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.884 07:13:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.884 07:13:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.884 07:13:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.884 07:13:21 -- accel/accel.sh@42 -- # jq -r . 00:07:19.884 [2024-11-04 07:13:21.417306] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:19.884 [2024-11-04 07:13:21.417402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71374 ] 00:07:19.884 [2024-11-04 07:13:21.545511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.884 [2024-11-04 07:13:21.593222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=0x1 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=decompress 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=software 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=32 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- 
accel/accel.sh@21 -- # val=32 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=2 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val=Yes 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.884 07:13:21 -- accel/accel.sh@21 -- # val= 00:07:19.884 07:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.884 07:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@21 -- # val= 00:07:21.261 07:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # IFS=: 00:07:21.261 07:13:22 -- accel/accel.sh@20 -- # read -r var val 00:07:21.261 07:13:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.261 07:13:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.261 07:13:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.261 00:07:21.261 real 0m2.796s 00:07:21.261 user 0m2.378s 00:07:21.261 sys 0m0.218s 00:07:21.261 07:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.261 07:13:22 -- common/autotest_common.sh@10 -- # set +x 00:07:21.261 ************************************ 00:07:21.261 END 
TEST accel_decomp_mthread 00:07:21.261 ************************************ 00:07:21.261 07:13:22 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.261 07:13:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:21.261 07:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.261 07:13:22 -- common/autotest_common.sh@10 -- # set +x 00:07:21.261 ************************************ 00:07:21.261 START TEST accel_deomp_full_mthread 00:07:21.261 ************************************ 00:07:21.261 07:13:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.261 07:13:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.261 07:13:22 -- accel/accel.sh@17 -- # local accel_module 00:07:21.261 07:13:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.261 07:13:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.261 07:13:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.261 07:13:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.261 07:13:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.262 07:13:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.262 07:13:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.262 07:13:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.262 07:13:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.262 07:13:22 -- accel/accel.sh@42 -- # jq -r . 00:07:21.262 [2024-11-04 07:13:22.873187] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:21.262 [2024-11-04 07:13:22.873304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71408 ] 00:07:21.262 [2024-11-04 07:13:23.011140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.262 [2024-11-04 07:13:23.062980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.638 07:13:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:22.638 00:07:22.638 SPDK Configuration: 00:07:22.638 Core mask: 0x1 00:07:22.638 00:07:22.638 Accel Perf Configuration: 00:07:22.638 Workload Type: decompress 00:07:22.638 Transfer size: 111250 bytes 00:07:22.638 Vector count 1 00:07:22.638 Module: software 00:07:22.638 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.638 Queue depth: 32 00:07:22.638 Allocate depth: 32 00:07:22.638 # threads/core: 2 00:07:22.638 Run time: 1 seconds 00:07:22.638 Verify: Yes 00:07:22.638 00:07:22.638 Running for 1 seconds... 
00:07:22.638 00:07:22.638 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.638 ------------------------------------------------------------------------------------ 00:07:22.638 0,1 2880/s 118 MiB/s 0 0 00:07:22.638 0,0 2848/s 117 MiB/s 0 0 00:07:22.638 ==================================================================================== 00:07:22.638 Total 5728/s 607 MiB/s 0 0' 00:07:22.638 07:13:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.638 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.638 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.638 07:13:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.638 07:13:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.638 07:13:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.638 07:13:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.638 07:13:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.638 07:13:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.638 07:13:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.638 07:13:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.638 07:13:24 -- accel/accel.sh@42 -- # jq -r . 00:07:22.638 [2024-11-04 07:13:24.293973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:22.638 [2024-11-04 07:13:24.294060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71428 ] 00:07:22.638 [2024-11-04 07:13:24.431474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.897 [2024-11-04 07:13:24.487468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.897 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.897 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.897 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.897 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=0x1 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=decompress 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=software 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=32 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=32 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=2 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val=Yes 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.898 07:13:24 -- accel/accel.sh@21 -- # val= 00:07:22.898 07:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.898 07:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # 
read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@21 -- # val= 00:07:24.275 07:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:24.275 ************************************ 00:07:24.275 END TEST accel_deomp_full_mthread 00:07:24.275 ************************************ 00:07:24.275 07:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:24.275 07:13:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.275 07:13:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.275 07:13:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.275 00:07:24.275 real 0m2.858s 00:07:24.275 user 0m2.429s 00:07:24.275 sys 0m0.229s 00:07:24.275 07:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.275 07:13:25 -- common/autotest_common.sh@10 -- # set +x 00:07:24.275 07:13:25 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:24.275 07:13:25 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.275 07:13:25 -- accel/accel.sh@129 -- # build_accel_config 00:07:24.275 07:13:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.275 07:13:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:24.275 07:13:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.275 07:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.275 07:13:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.275 07:13:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.275 07:13:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.275 07:13:25 -- common/autotest_common.sh@10 -- # set +x 00:07:24.275 07:13:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.275 07:13:25 -- accel/accel.sh@42 -- # jq -r . 00:07:24.275 ************************************ 00:07:24.275 START TEST accel_dif_functional_tests 00:07:24.275 ************************************ 00:07:24.275 07:13:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.275 [2024-11-04 07:13:25.815088] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
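Each sub-test above, including the DIF functional test just launched, is started through the harness's run_test helper, which prints the starred START/END banners and the real/user/sys timings that follow every test. A simplified stand-in (an assumption for illustration, not the actual helper from autotest_common.sh) could look like:

```bash
# Simplified stand-in for the harness's run_test helper. Assumption: the real
# helper in common/autotest_common.sh also manages xtrace and bookkeeping;
# this only reproduces the banners and the `time` output seen in this log.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage matching the invocation above (requires a built SPDK tree):
# run_test accel_dif_functional_tests \
#     /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
```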
00:07:24.275 [2024-11-04 07:13:25.815332] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71458 ] 00:07:24.275 [2024-11-04 07:13:25.949191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.275 [2024-11-04 07:13:26.008684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.275 [2024-11-04 07:13:26.008828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.275 [2024-11-04 07:13:26.008833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.275 00:07:24.275 00:07:24.275 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.275 http://cunit.sourceforge.net/ 00:07:24.275 00:07:24.275 00:07:24.275 Suite: accel_dif 00:07:24.275 Test: verify: DIF generated, GUARD check ...passed 00:07:24.275 Test: verify: DIF generated, APPTAG check ...passed 00:07:24.275 Test: verify: DIF generated, REFTAG check ...passed 00:07:24.275 Test: verify: DIF not generated, GUARD check ...[2024-11-04 07:13:26.096941] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:24.275 passed 00:07:24.275 Test: verify: DIF not generated, APPTAG check ...[2024-11-04 07:13:26.097011] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:24.275 [2024-11-04 07:13:26.097053] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:24.275 passed 00:07:24.275 Test: verify: DIF not generated, REFTAG check ...[2024-11-04 07:13:26.097166] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:24.275 [2024-11-04 07:13:26.097202] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:24.275 passed 00:07:24.275 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:24.275 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-04 07:13:26.097350] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:24.275 [2024-11-04 07:13:26.097420] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:24.275 passed 00:07:24.275 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:24.275 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:24.275 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:24.275 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:24.275 Test: generate copy: DIF generated, GUARD check ...[2024-11-04 07:13:26.097788] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:24.275 passed 00:07:24.275 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:24.275 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:24.275 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:24.275 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:24.275 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:24.275 Test: generate copy: iovecs-len validate ...passed 00:07:24.275 Test: generate copy: buffer alignment validate ...[2024-11-04 07:13:26.098412] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:24.275 passed 00:07:24.275 00:07:24.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.275 suites 1 1 n/a 0 0 00:07:24.275 tests 20 20 20 0 0 00:07:24.275 asserts 204 204 204 0 n/a 00:07:24.275 00:07:24.275 Elapsed time = 0.005 seconds 00:07:24.534 ************************************ 00:07:24.534 END TEST accel_dif_functional_tests 00:07:24.534 ************************************ 00:07:24.534 00:07:24.534 real 0m0.514s 00:07:24.534 user 0m0.686s 00:07:24.534 sys 0m0.157s 00:07:24.534 07:13:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.534 07:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.534 00:07:24.534 real 1m0.662s 00:07:24.534 user 1m4.777s 00:07:24.534 sys 0m6.201s 00:07:24.534 07:13:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.534 ************************************ 00:07:24.534 END TEST accel 00:07:24.534 ************************************ 00:07:24.534 07:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.534 07:13:26 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:24.534 07:13:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:24.534 07:13:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.534 07:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.793 ************************************ 00:07:24.793 START TEST accel_rpc 00:07:24.793 ************************************ 00:07:24.793 07:13:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:24.793 * Looking for test storage... 00:07:24.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:24.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.793 07:13:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:24.793 07:13:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71527 00:07:24.793 07:13:26 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:24.793 07:13:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 71527 00:07:24.793 07:13:26 -- common/autotest_common.sh@819 -- # '[' -z 71527 ']' 00:07:24.793 07:13:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.793 07:13:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:24.793 07:13:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.793 07:13:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:24.793 07:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.793 [2024-11-04 07:13:26.529187] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:24.793 [2024-11-04 07:13:26.529688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71527 ] 00:07:25.052 [2024-11-04 07:13:26.668400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.052 [2024-11-04 07:13:26.723820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.052 [2024-11-04 07:13:26.724333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.986 07:13:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.986 07:13:27 -- common/autotest_common.sh@852 -- # return 0 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:25.986 07:13:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:25.986 07:13:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 ************************************ 00:07:25.986 START TEST accel_assign_opcode 00:07:25.986 ************************************ 00:07:25.986 07:13:27 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:25.986 07:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 [2024-11-04 07:13:27.501283] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:25.986 07:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:25.986 07:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 [2024-11-04 07:13:27.509297] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:25.986 07:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:25.986 07:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 07:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:25.986 07:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@42 -- # grep software 00:07:25.986 07:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.986 software 00:07:25.986 ************************************ 00:07:25.986 END TEST accel_assign_opcode 00:07:25.986 ************************************ 00:07:25.986 00:07:25.986 real 0m0.280s 00:07:25.986 user 0m0.054s 00:07:25.986 sys 0m0.011s 00:07:25.986 07:13:27 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.986 07:13:27 -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 07:13:27 -- accel/accel_rpc.sh@55 -- # killprocess 71527 00:07:25.986 07:13:27 -- common/autotest_common.sh@926 -- # '[' -z 71527 ']' 00:07:25.986 07:13:27 -- common/autotest_common.sh@930 -- # kill -0 71527 00:07:25.986 07:13:27 -- common/autotest_common.sh@931 -- # uname 00:07:25.986 07:13:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:26.244 07:13:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71527 00:07:26.245 killing process with pid 71527 00:07:26.245 07:13:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:26.245 07:13:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:26.245 07:13:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71527' 00:07:26.245 07:13:27 -- common/autotest_common.sh@945 -- # kill 71527 00:07:26.245 07:13:27 -- common/autotest_common.sh@950 -- # wait 71527 00:07:26.503 00:07:26.504 real 0m1.813s 00:07:26.504 user 0m1.915s 00:07:26.504 sys 0m0.437s 00:07:26.504 ************************************ 00:07:26.504 END TEST accel_rpc 00:07:26.504 ************************************ 00:07:26.504 07:13:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.504 07:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.504 07:13:28 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.504 07:13:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:26.504 07:13:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.504 07:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.504 ************************************ 00:07:26.504 START TEST app_cmdline 00:07:26.504 ************************************ 00:07:26.504 07:13:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.504 * Looking for test storage... 00:07:26.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.504 07:13:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.504 07:13:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71632 00:07:26.504 07:13:28 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.504 07:13:28 -- app/cmdline.sh@18 -- # waitforlisten 71632 00:07:26.504 07:13:28 -- common/autotest_common.sh@819 -- # '[' -z 71632 ']' 00:07:26.504 07:13:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.504 07:13:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:26.504 07:13:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.504 07:13:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:26.504 07:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.763 [2024-11-04 07:13:28.382547] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
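The app_cmdline test starting here launches the target with an RPC allow-list. A condensed sketch of what it exercises, using only the paths and method names shown in this trace:

# Only spdk_get_version and rpc_get_methods are permitted.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

# A listed method succeeds and returns the version JSON seen below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

# Any other method is rejected with JSON-RPC error -32601 (Method not found).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats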
00:07:26.763 [2024-11-04 07:13:28.382882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71632 ] 00:07:26.763 [2024-11-04 07:13:28.507556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.763 [2024-11-04 07:13:28.565912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.763 [2024-11-04 07:13:28.566368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.698 07:13:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:27.698 07:13:29 -- common/autotest_common.sh@852 -- # return 0 00:07:27.698 07:13:29 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:27.972 { 00:07:27.972 "fields": { 00:07:27.972 "commit": "726a04d70", 00:07:27.972 "major": 24, 00:07:27.972 "minor": 1, 00:07:27.972 "patch": 1, 00:07:27.972 "suffix": "-pre" 00:07:27.972 }, 00:07:27.972 "version": "SPDK v24.01.1-pre git sha1 726a04d70" 00:07:27.972 } 00:07:27.972 07:13:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:27.972 07:13:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:27.972 07:13:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:27.972 07:13:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:27.972 07:13:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:27.972 07:13:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:27.972 07:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:27.972 07:13:29 -- common/autotest_common.sh@10 -- # set +x 00:07:27.972 07:13:29 -- app/cmdline.sh@26 -- # sort 00:07:27.972 07:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.972 07:13:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:27.972 07:13:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:27.972 07:13:29 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.972 07:13:29 -- common/autotest_common.sh@640 -- # local es=0 00:07:27.972 07:13:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.972 07:13:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.972 07:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:27.972 07:13:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.972 07:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:27.972 07:13:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.973 07:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:27.973 07:13:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.973 07:13:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.973 07:13:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.235 2024/11/04 07:13:29 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:28.235 request: 00:07:28.235 { 00:07:28.235 "method": "env_dpdk_get_mem_stats", 00:07:28.235 "params": {} 00:07:28.235 } 00:07:28.235 Got JSON-RPC error response 00:07:28.235 GoRPCClient: error on JSON-RPC call 00:07:28.235 07:13:30 -- common/autotest_common.sh@643 -- # es=1 00:07:28.235 07:13:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:28.235 07:13:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:28.235 07:13:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:28.235 07:13:30 -- app/cmdline.sh@1 -- # killprocess 71632 00:07:28.235 07:13:30 -- common/autotest_common.sh@926 -- # '[' -z 71632 ']' 00:07:28.235 07:13:30 -- common/autotest_common.sh@930 -- # kill -0 71632 00:07:28.235 07:13:30 -- common/autotest_common.sh@931 -- # uname 00:07:28.235 07:13:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:28.235 07:13:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71632 00:07:28.235 07:13:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:28.235 07:13:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:28.235 killing process with pid 71632 00:07:28.235 07:13:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71632' 00:07:28.235 07:13:30 -- common/autotest_common.sh@945 -- # kill 71632 00:07:28.235 07:13:30 -- common/autotest_common.sh@950 -- # wait 71632 00:07:28.802 00:07:28.802 real 0m2.139s 00:07:28.802 user 0m2.727s 00:07:28.802 sys 0m0.496s 00:07:28.802 ************************************ 00:07:28.802 END TEST app_cmdline 00:07:28.802 ************************************ 00:07:28.802 07:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.802 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.802 07:13:30 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.802 07:13:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.802 07:13:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.802 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.802 ************************************ 00:07:28.802 START TEST version 00:07:28.802 ************************************ 00:07:28.802 07:13:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.802 * Looking for test storage... 
00:07:28.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.802 07:13:30 -- app/version.sh@17 -- # get_header_version major 00:07:28.802 07:13:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.802 07:13:30 -- app/version.sh@14 -- # cut -f2 00:07:28.802 07:13:30 -- app/version.sh@14 -- # tr -d '"' 00:07:28.802 07:13:30 -- app/version.sh@17 -- # major=24 00:07:28.802 07:13:30 -- app/version.sh@18 -- # get_header_version minor 00:07:28.802 07:13:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.802 07:13:30 -- app/version.sh@14 -- # cut -f2 00:07:28.802 07:13:30 -- app/version.sh@14 -- # tr -d '"' 00:07:28.802 07:13:30 -- app/version.sh@18 -- # minor=1 00:07:28.802 07:13:30 -- app/version.sh@19 -- # get_header_version patch 00:07:28.802 07:13:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.802 07:13:30 -- app/version.sh@14 -- # cut -f2 00:07:28.802 07:13:30 -- app/version.sh@14 -- # tr -d '"' 00:07:28.802 07:13:30 -- app/version.sh@19 -- # patch=1 00:07:28.802 07:13:30 -- app/version.sh@20 -- # get_header_version suffix 00:07:28.802 07:13:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.802 07:13:30 -- app/version.sh@14 -- # tr -d '"' 00:07:28.802 07:13:30 -- app/version.sh@14 -- # cut -f2 00:07:28.802 07:13:30 -- app/version.sh@20 -- # suffix=-pre 00:07:28.802 07:13:30 -- app/version.sh@22 -- # version=24.1 00:07:28.802 07:13:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.802 07:13:30 -- app/version.sh@25 -- # version=24.1.1 00:07:28.802 07:13:30 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:28.802 07:13:30 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:28.802 07:13:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:28.802 07:13:30 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:28.802 07:13:30 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:28.802 00:07:28.802 real 0m0.155s 00:07:28.802 user 0m0.092s 00:07:28.803 sys 0m0.099s 00:07:28.803 07:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.803 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.803 ************************************ 00:07:28.803 END TEST version 00:07:28.803 ************************************ 00:07:28.803 07:13:30 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@204 -- # uname -s 00:07:29.061 07:13:30 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:29.061 07:13:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:29.061 07:13:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:29.061 07:13:30 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:29.061 07:13:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.061 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.061 07:13:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:29.061 07:13:30 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:29.061 07:13:30 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:29.061 07:13:30 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.061 07:13:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:29.061 07:13:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.061 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 ************************************ 00:07:29.062 START TEST nvmf_tcp 00:07:29.062 ************************************ 00:07:29.062 07:13:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.062 * Looking for test storage... 00:07:29.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.062 07:13:30 -- nvmf/common.sh@7 -- # uname -s 00:07:29.062 07:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.062 07:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.062 07:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.062 07:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.062 07:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.062 07:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.062 07:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.062 07:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.062 07:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.062 07:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.062 07:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:29.062 07:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:29.062 07:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.062 07:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.062 07:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.062 07:13:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.062 07:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.062 07:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.062 07:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.062 07:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.062 07:13:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.062 07:13:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.062 07:13:30 -- paths/export.sh@5 -- # export PATH 00:07:29.062 07:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.062 07:13:30 -- nvmf/common.sh@46 -- # : 0 00:07:29.062 07:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:29.062 07:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:29.062 07:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:29.062 07:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.062 07:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.062 07:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:29.062 07:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:29.062 07:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:29.062 07:13:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:29.062 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:29.062 07:13:30 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.062 07:13:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:29.062 07:13:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.062 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 ************************************ 00:07:29.062 START TEST nvmf_example 00:07:29.062 ************************************ 00:07:29.062 07:13:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.062 * Looking for test storage... 
00:07:29.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.062 07:13:30 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.321 07:13:30 -- nvmf/common.sh@7 -- # uname -s 00:07:29.321 07:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.321 07:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.321 07:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.321 07:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.321 07:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.321 07:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.321 07:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.321 07:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.321 07:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.321 07:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.321 07:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:29.321 07:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:29.321 07:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.321 07:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.321 07:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.321 07:13:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.321 07:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.321 07:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.321 07:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.321 07:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.321 07:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.322 07:13:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.322 07:13:30 -- 
paths/export.sh@5 -- # export PATH 00:07:29.322 07:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.322 07:13:30 -- nvmf/common.sh@46 -- # : 0 00:07:29.322 07:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:29.322 07:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:29.322 07:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:29.322 07:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.322 07:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.322 07:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:29.322 07:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:29.322 07:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:29.322 07:13:30 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:29.322 07:13:30 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:29.322 07:13:30 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:29.322 07:13:30 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:29.322 07:13:30 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:29.322 07:13:30 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:29.322 07:13:30 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:29.322 07:13:30 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:29.322 07:13:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:29.322 07:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.322 07:13:30 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:29.322 07:13:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:29.322 07:13:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.322 07:13:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:29.322 07:13:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:29.322 07:13:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:29.322 07:13:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.322 07:13:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.322 07:13:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.322 07:13:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:29.322 07:13:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:29.322 07:13:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:29.322 07:13:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:29.322 07:13:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:29.322 07:13:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:29.322 07:13:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.322 07:13:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.322 07:13:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.322 07:13:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:29.322 07:13:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.322 07:13:30 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.322 07:13:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.322 07:13:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.322 07:13:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.322 07:13:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.322 07:13:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.322 07:13:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.322 07:13:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:29.322 Cannot find device "nvmf_init_br" 00:07:29.322 07:13:30 -- nvmf/common.sh@153 -- # true 00:07:29.322 07:13:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:29.322 Cannot find device "nvmf_tgt_br" 00:07:29.322 07:13:30 -- nvmf/common.sh@154 -- # true 00:07:29.322 07:13:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.322 Cannot find device "nvmf_tgt_br2" 00:07:29.322 07:13:30 -- nvmf/common.sh@155 -- # true 00:07:29.322 07:13:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:29.322 Cannot find device "nvmf_init_br" 00:07:29.322 07:13:30 -- nvmf/common.sh@156 -- # true 00:07:29.322 07:13:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:29.322 Cannot find device "nvmf_tgt_br" 00:07:29.322 07:13:30 -- nvmf/common.sh@157 -- # true 00:07:29.322 07:13:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:29.322 Cannot find device "nvmf_tgt_br2" 00:07:29.322 07:13:31 -- nvmf/common.sh@158 -- # true 00:07:29.322 07:13:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:29.322 Cannot find device "nvmf_br" 00:07:29.322 07:13:31 -- nvmf/common.sh@159 -- # true 00:07:29.322 07:13:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:29.322 Cannot find device "nvmf_init_if" 00:07:29.322 07:13:31 -- nvmf/common.sh@160 -- # true 00:07:29.322 07:13:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.322 07:13:31 -- nvmf/common.sh@161 -- # true 00:07:29.322 07:13:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.322 07:13:31 -- nvmf/common.sh@162 -- # true 00:07:29.322 07:13:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.322 07:13:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.322 07:13:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.322 07:13:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.322 07:13:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.322 07:13:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.322 07:13:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.322 07:13:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.322 07:13:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.322 07:13:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:29.322 
07:13:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:29.322 07:13:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:29.322 07:13:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:29.322 07:13:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.322 07:13:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.581 07:13:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.581 07:13:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:29.581 07:13:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:29.581 07:13:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.581 07:13:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.581 07:13:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.581 07:13:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.581 07:13:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.581 07:13:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:29.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:07:29.581 00:07:29.581 --- 10.0.0.2 ping statistics --- 00:07:29.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.581 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:29.581 07:13:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:29.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:29.581 00:07:29.581 --- 10.0.0.3 ping statistics --- 00:07:29.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.581 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:29.581 07:13:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:29.581 00:07:29.581 --- 10.0.0.1 ping statistics --- 00:07:29.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.581 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:29.581 07:13:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.581 07:13:31 -- nvmf/common.sh@421 -- # return 0 00:07:29.581 07:13:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:29.581 07:13:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.581 07:13:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:29.581 07:13:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:29.581 07:13:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.581 07:13:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:29.581 07:13:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:29.581 07:13:31 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:29.581 07:13:31 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:29.581 07:13:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:29.581 07:13:31 -- common/autotest_common.sh@10 -- # set +x 00:07:29.581 07:13:31 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:29.581 07:13:31 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:29.581 07:13:31 -- target/nvmf_example.sh@34 -- # nvmfpid=71987 00:07:29.581 07:13:31 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:29.581 07:13:31 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.581 07:13:31 -- target/nvmf_example.sh@36 -- # waitforlisten 71987 00:07:29.581 07:13:31 -- common/autotest_common.sh@819 -- # '[' -z 71987 ']' 00:07:29.581 07:13:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.581 07:13:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.581 07:13:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
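For reference, the veth/bridge/namespace topology that nvmf/common.sh traced above can be condensed to the sketch below (same interface names and addresses as in the trace; the second target interface and its br2 leg are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator to target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace back to initiator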
00:07:29.581 07:13:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.581 07:13:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:30.999 07:13:32 -- common/autotest_common.sh@852 -- # return 0 00:07:30.999 07:13:32 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:30.999 07:13:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.999 07:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.999 07:13:32 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:30.999 07:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.999 07:13:32 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:30.999 07:13:32 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.999 07:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.999 07:13:32 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:30.999 07:13:32 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.999 07:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.999 07:13:32 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.999 07:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.999 07:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 07:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.999 07:13:32 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:30.999 07:13:32 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:40.972 Initializing NVMe Controllers 00:07:40.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:40.972 Initialization complete. Launching workers. 
00:07:40.973 ======================================================== 00:07:40.973 Latency(us) 00:07:40.973 Device Information : IOPS MiB/s Average min max 00:07:40.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16974.69 66.31 3770.09 590.55 24872.49 00:07:40.973 ======================================================== 00:07:40.973 Total : 16974.69 66.31 3770.09 590.55 24872.49 00:07:40.973 00:07:40.973 07:13:42 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:40.973 07:13:42 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:40.973 07:13:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:40.973 07:13:42 -- nvmf/common.sh@116 -- # sync 00:07:41.231 07:13:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:41.231 07:13:42 -- nvmf/common.sh@119 -- # set +e 00:07:41.231 07:13:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:41.231 07:13:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:41.231 rmmod nvme_tcp 00:07:41.231 rmmod nvme_fabrics 00:07:41.231 rmmod nvme_keyring 00:07:41.231 07:13:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:41.231 07:13:42 -- nvmf/common.sh@123 -- # set -e 00:07:41.232 07:13:42 -- nvmf/common.sh@124 -- # return 0 00:07:41.232 07:13:42 -- nvmf/common.sh@477 -- # '[' -n 71987 ']' 00:07:41.232 07:13:42 -- nvmf/common.sh@478 -- # killprocess 71987 00:07:41.232 07:13:42 -- common/autotest_common.sh@926 -- # '[' -z 71987 ']' 00:07:41.232 07:13:42 -- common/autotest_common.sh@930 -- # kill -0 71987 00:07:41.232 07:13:42 -- common/autotest_common.sh@931 -- # uname 00:07:41.232 07:13:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:41.232 07:13:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71987 00:07:41.232 07:13:42 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:41.232 07:13:42 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:41.232 killing process with pid 71987 00:07:41.232 07:13:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71987' 00:07:41.232 07:13:42 -- common/autotest_common.sh@945 -- # kill 71987 00:07:41.232 07:13:42 -- common/autotest_common.sh@950 -- # wait 71987 00:07:41.491 nvmf threads initialize successfully 00:07:41.491 bdev subsystem init successfully 00:07:41.491 created a nvmf target service 00:07:41.491 create targets's poll groups done 00:07:41.491 all subsystems of target started 00:07:41.491 nvmf target is running 00:07:41.491 all subsystems of target stopped 00:07:41.491 destroy targets's poll groups done 00:07:41.491 destroyed the nvmf target service 00:07:41.491 bdev subsystem finish successfully 00:07:41.491 nvmf threads destroy successfully 00:07:41.491 07:13:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:41.491 07:13:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:41.491 07:13:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:41.491 07:13:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.491 07:13:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:41.491 07:13:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.491 07:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.491 07:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.491 07:13:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:41.491 07:13:43 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:41.491 07:13:43 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:41.491 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 00:07:41.491 real 0m12.361s 00:07:41.491 user 0m44.479s 00:07:41.491 sys 0m1.916s 00:07:41.491 07:13:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.491 ************************************ 00:07:41.491 END TEST nvmf_example 00:07:41.491 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 ************************************ 00:07:41.491 07:13:43 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.491 07:13:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:41.491 07:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.491 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 ************************************ 00:07:41.491 START TEST nvmf_filesystem 00:07:41.491 ************************************ 00:07:41.491 07:13:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.491 * Looking for test storage... 00:07:41.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.491 07:13:43 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:41.491 07:13:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:41.491 07:13:43 -- common/autotest_common.sh@34 -- # set -e 00:07:41.491 07:13:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:41.491 07:13:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:41.491 07:13:43 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:41.491 07:13:43 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:41.491 07:13:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:41.491 07:13:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:41.491 07:13:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:41.491 07:13:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:41.491 07:13:43 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:41.491 07:13:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:41.491 07:13:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:41.491 07:13:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:41.491 07:13:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:41.491 07:13:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:41.491 07:13:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:41.491 07:13:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:41.491 07:13:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:41.491 07:13:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:41.491 07:13:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:41.491 07:13:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:41.491 07:13:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:41.491 07:13:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:41.491 07:13:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:41.491 07:13:43 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:41.491 07:13:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:41.491 07:13:43 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:07:41.491 07:13:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:41.491 07:13:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:41.491 07:13:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:41.491 07:13:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:41.491 07:13:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:41.491 07:13:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:41.491 07:13:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:41.491 07:13:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:41.491 07:13:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:41.492 07:13:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:41.492 07:13:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:41.492 07:13:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:41.492 07:13:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:41.492 07:13:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:41.492 07:13:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:41.492 07:13:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:41.492 07:13:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:41.492 07:13:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:41.492 07:13:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:41.492 07:13:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:41.492 07:13:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:41.492 07:13:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:41.492 07:13:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:41.492 07:13:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:41.492 07:13:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:41.492 07:13:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:41.492 07:13:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:41.492 07:13:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:41.492 07:13:43 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:41.492 07:13:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:41.492 07:13:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:41.492 07:13:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:41.492 07:13:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:41.492 07:13:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:41.492 07:13:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:41.492 07:13:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:41.492 07:13:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:41.492 07:13:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:41.492 07:13:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:41.492 07:13:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:41.492 07:13:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:41.492 07:13:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:41.492 07:13:43 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:41.492 07:13:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:41.492 07:13:43 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:41.492 07:13:43 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:07:41.492 07:13:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:41.492 07:13:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:41.492 07:13:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:41.492 07:13:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:41.492 07:13:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:41.492 07:13:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:41.492 07:13:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:41.492 07:13:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:41.492 07:13:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:41.492 07:13:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:41.492 07:13:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:41.492 07:13:43 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:41.492 07:13:43 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:41.492 07:13:43 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:41.492 07:13:43 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:41.492 07:13:43 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:41.492 07:13:43 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:41.492 07:13:43 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:41.492 07:13:43 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:41.492 07:13:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:41.492 07:13:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:41.492 07:13:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:41.492 07:13:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:41.492 07:13:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:41.492 07:13:43 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:41.492 07:13:43 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:41.492 07:13:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:41.492 #define SPDK_CONFIG_H 00:07:41.492 #define SPDK_CONFIG_APPS 1 00:07:41.492 #define SPDK_CONFIG_ARCH native 00:07:41.492 #undef SPDK_CONFIG_ASAN 00:07:41.492 #define SPDK_CONFIG_AVAHI 1 00:07:41.492 #undef SPDK_CONFIG_CET 00:07:41.492 #define SPDK_CONFIG_COVERAGE 1 00:07:41.492 #define SPDK_CONFIG_CROSS_PREFIX 00:07:41.492 #undef SPDK_CONFIG_CRYPTO 00:07:41.492 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:41.492 #undef SPDK_CONFIG_CUSTOMOCF 00:07:41.492 #undef SPDK_CONFIG_DAOS 00:07:41.492 #define SPDK_CONFIG_DAOS_DIR 00:07:41.492 #define SPDK_CONFIG_DEBUG 1 00:07:41.492 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:41.492 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:41.492 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:41.492 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:41.492 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:41.492 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:41.492 #define SPDK_CONFIG_EXAMPLES 1 00:07:41.492 #undef SPDK_CONFIG_FC 00:07:41.492 #define 
SPDK_CONFIG_FC_PATH 00:07:41.492 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:41.492 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:41.492 #undef SPDK_CONFIG_FUSE 00:07:41.492 #undef SPDK_CONFIG_FUZZER 00:07:41.492 #define SPDK_CONFIG_FUZZER_LIB 00:07:41.492 #define SPDK_CONFIG_GOLANG 1 00:07:41.492 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:41.492 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:41.492 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:41.492 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:41.492 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:41.492 #define SPDK_CONFIG_IDXD 1 00:07:41.492 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:41.492 #undef SPDK_CONFIG_IPSEC_MB 00:07:41.492 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:41.492 #define SPDK_CONFIG_ISAL 1 00:07:41.492 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:41.492 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:41.492 #define SPDK_CONFIG_LIBDIR 00:07:41.492 #undef SPDK_CONFIG_LTO 00:07:41.492 #define SPDK_CONFIG_MAX_LCORES 00:07:41.492 #define SPDK_CONFIG_NVME_CUSE 1 00:07:41.492 #undef SPDK_CONFIG_OCF 00:07:41.492 #define SPDK_CONFIG_OCF_PATH 00:07:41.492 #define SPDK_CONFIG_OPENSSL_PATH 00:07:41.492 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:41.492 #undef SPDK_CONFIG_PGO_USE 00:07:41.492 #define SPDK_CONFIG_PREFIX /usr/local 00:07:41.492 #undef SPDK_CONFIG_RAID5F 00:07:41.492 #undef SPDK_CONFIG_RBD 00:07:41.492 #define SPDK_CONFIG_RDMA 1 00:07:41.492 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:41.492 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:41.492 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:41.492 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:41.492 #define SPDK_CONFIG_SHARED 1 00:07:41.492 #undef SPDK_CONFIG_SMA 00:07:41.492 #define SPDK_CONFIG_TESTS 1 00:07:41.492 #undef SPDK_CONFIG_TSAN 00:07:41.492 #define SPDK_CONFIG_UBLK 1 00:07:41.492 #define SPDK_CONFIG_UBSAN 1 00:07:41.492 #undef SPDK_CONFIG_UNIT_TESTS 00:07:41.492 #undef SPDK_CONFIG_URING 00:07:41.492 #define SPDK_CONFIG_URING_PATH 00:07:41.492 #undef SPDK_CONFIG_URING_ZNS 00:07:41.492 #define SPDK_CONFIG_USDT 1 00:07:41.492 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:41.492 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:41.492 #undef SPDK_CONFIG_VFIO_USER 00:07:41.492 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:41.492 #define SPDK_CONFIG_VHOST 1 00:07:41.492 #define SPDK_CONFIG_VIRTIO 1 00:07:41.492 #undef SPDK_CONFIG_VTUNE 00:07:41.492 #define SPDK_CONFIG_VTUNE_DIR 00:07:41.492 #define SPDK_CONFIG_WERROR 1 00:07:41.492 #define SPDK_CONFIG_WPDK_DIR 00:07:41.492 #undef SPDK_CONFIG_XNVME 00:07:41.492 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:41.492 07:13:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:41.492 07:13:43 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.492 07:13:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.492 07:13:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.492 07:13:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.492 07:13:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.492 07:13:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.492 07:13:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.492 07:13:43 -- paths/export.sh@5 -- # export PATH 00:07:41.493 07:13:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.493 07:13:43 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:41.493 07:13:43 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:41.753 07:13:43 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:41.753 07:13:43 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:41.753 07:13:43 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:41.753 07:13:43 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:41.753 07:13:43 -- pm/common@16 -- # TEST_TAG=N/A 00:07:41.753 07:13:43 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:41.753 07:13:43 -- common/autotest_common.sh@52 -- # : 1 00:07:41.753 07:13:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:41.753 07:13:43 -- common/autotest_common.sh@56 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:41.753 07:13:43 -- common/autotest_common.sh@58 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:41.753 07:13:43 -- 
common/autotest_common.sh@60 -- # : 1 00:07:41.753 07:13:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:41.753 07:13:43 -- common/autotest_common.sh@62 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:41.753 07:13:43 -- common/autotest_common.sh@64 -- # : 00:07:41.753 07:13:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:41.753 07:13:43 -- common/autotest_common.sh@66 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:41.753 07:13:43 -- common/autotest_common.sh@68 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:41.753 07:13:43 -- common/autotest_common.sh@70 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:41.753 07:13:43 -- common/autotest_common.sh@72 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:41.753 07:13:43 -- common/autotest_common.sh@74 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:41.753 07:13:43 -- common/autotest_common.sh@76 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:41.753 07:13:43 -- common/autotest_common.sh@78 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:41.753 07:13:43 -- common/autotest_common.sh@80 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:41.753 07:13:43 -- common/autotest_common.sh@82 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:41.753 07:13:43 -- common/autotest_common.sh@84 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:41.753 07:13:43 -- common/autotest_common.sh@86 -- # : 1 00:07:41.753 07:13:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:41.753 07:13:43 -- common/autotest_common.sh@88 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:41.753 07:13:43 -- common/autotest_common.sh@90 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:41.753 07:13:43 -- common/autotest_common.sh@92 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:41.753 07:13:43 -- common/autotest_common.sh@94 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:41.753 07:13:43 -- common/autotest_common.sh@96 -- # : tcp 00:07:41.753 07:13:43 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:41.753 07:13:43 -- common/autotest_common.sh@98 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:41.753 07:13:43 -- common/autotest_common.sh@100 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:41.753 07:13:43 -- common/autotest_common.sh@102 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:41.753 07:13:43 -- common/autotest_common.sh@104 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:41.753 07:13:43 -- common/autotest_common.sh@106 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:41.753 
07:13:43 -- common/autotest_common.sh@108 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:41.753 07:13:43 -- common/autotest_common.sh@110 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:41.753 07:13:43 -- common/autotest_common.sh@112 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:41.753 07:13:43 -- common/autotest_common.sh@114 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:41.753 07:13:43 -- common/autotest_common.sh@116 -- # : 1 00:07:41.753 07:13:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:41.753 07:13:43 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:41.753 07:13:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:41.753 07:13:43 -- common/autotest_common.sh@120 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:41.753 07:13:43 -- common/autotest_common.sh@122 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:41.753 07:13:43 -- common/autotest_common.sh@124 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:41.753 07:13:43 -- common/autotest_common.sh@126 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:41.753 07:13:43 -- common/autotest_common.sh@128 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:41.753 07:13:43 -- common/autotest_common.sh@130 -- # : 0 00:07:41.753 07:13:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:41.754 07:13:43 -- common/autotest_common.sh@132 -- # : v23.11 00:07:41.754 07:13:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:41.754 07:13:43 -- common/autotest_common.sh@134 -- # : true 00:07:41.754 07:13:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:41.754 07:13:43 -- common/autotest_common.sh@136 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:41.754 07:13:43 -- common/autotest_common.sh@138 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:41.754 07:13:43 -- common/autotest_common.sh@140 -- # : 1 00:07:41.754 07:13:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:41.754 07:13:43 -- common/autotest_common.sh@142 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:41.754 07:13:43 -- common/autotest_common.sh@144 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:41.754 07:13:43 -- common/autotest_common.sh@146 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:41.754 07:13:43 -- common/autotest_common.sh@148 -- # : 00:07:41.754 07:13:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:41.754 07:13:43 -- common/autotest_common.sh@150 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:41.754 07:13:43 -- common/autotest_common.sh@152 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:41.754 07:13:43 -- common/autotest_common.sh@154 -- # : 0 00:07:41.754 07:13:43 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:41.754 07:13:43 -- common/autotest_common.sh@156 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:41.754 07:13:43 -- common/autotest_common.sh@158 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:41.754 07:13:43 -- common/autotest_common.sh@160 -- # : 0 00:07:41.754 07:13:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:41.754 07:13:43 -- common/autotest_common.sh@163 -- # : 00:07:41.754 07:13:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:41.754 07:13:43 -- common/autotest_common.sh@165 -- # : 1 00:07:41.754 07:13:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:41.754 07:13:43 -- common/autotest_common.sh@167 -- # : 1 00:07:41.754 07:13:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:41.754 07:13:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:41.754 07:13:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:41.754 07:13:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:41.754 07:13:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:41.754 07:13:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:41.754 07:13:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.754 07:13:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.754 07:13:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.754 07:13:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.754 07:13:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:41.754 07:13:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:41.754 07:13:43 -- common/autotest_common.sh@196 -- # cat 00:07:41.754 07:13:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:41.754 07:13:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.754 07:13:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.754 07:13:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.754 07:13:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.754 07:13:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:41.754 07:13:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:41.754 07:13:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:41.754 07:13:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:41.754 07:13:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:41.754 07:13:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:41.754 07:13:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.754 07:13:43 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.754 07:13:43 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.754 07:13:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.754 07:13:43 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:41.754 07:13:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:41.754 07:13:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:41.754 07:13:43 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:41.754 07:13:43 -- common/autotest_common.sh@249 -- # valgrind= 00:07:41.754 07:13:43 -- common/autotest_common.sh@255 -- # uname -s 00:07:41.754 07:13:43 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:41.754 07:13:43 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:41.754 07:13:43 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:41.754 07:13:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:41.754 07:13:43 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:07:41.754 07:13:43 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:41.754 07:13:43 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:41.754 07:13:43 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:41.754 07:13:43 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:41.754 07:13:43 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:41.754 07:13:43 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:41.754 07:13:43 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:41.754 07:13:43 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:41.754 07:13:43 -- common/autotest_common.sh@309 -- # [[ -z 72229 ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@309 -- # kill -0 72229 00:07:41.754 07:13:43 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:41.754 07:13:43 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:41.754 07:13:43 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:41.754 07:13:43 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:41.754 07:13:43 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:41.754 07:13:43 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:41.754 07:13:43 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:41.754 07:13:43 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.o53Rqp 00:07:41.754 07:13:43 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:41.754 07:13:43 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:41.754 07:13:43 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.o53Rqp/tests/target /tmp/spdk.o53Rqp 00:07:41.754 07:13:43 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:41.754 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.754 07:13:43 -- 
common/autotest_common.sh@318 -- # df -T 00:07:41.754 07:13:43 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:41.754 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:41.754 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:41.754 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=13300465664 00:07:41.754 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=6283116544 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265167872 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6266425344 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=2493755392 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2506571776 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=12816384 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=13300465664 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=6283116544 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=840085504 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=103477248 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266290176 00:07:41.755 07:13:43 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=6266429440 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=91617280 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=12990464 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253269504 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253281792 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:41.755 07:13:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=98374434816 00:07:41.755 07:13:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:07:41.755 07:13:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=1328345088 00:07:41.755 07:13:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.755 07:13:43 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:41.755 * Looking for test storage... 
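The storage search that follows condenses to the logic sketched here. This is a simplified reconstruction based only on the trace (the real helper is set_test_storage in autotest_common.sh and applies extra checks, e.g. the tmpfs/ramfs and root-mount tests visible below); the mktemp pattern, the candidate list, and the requested size are taken from the traced values, while the -B1 flag on df is an assumption made to match the byte-sized numbers recorded above:

    # Candidate directories: the test dir itself, a per-test subdir under a mktemp fallback, then the fallback.
    requested_size=2214592512                      # ~2 GiB plus margin, as recorded in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.o53Rqp
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    # Index mounted filesystems by mount point (df -B1 assumed so sizes are in bytes, as in the trace).
    declare -A fss avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    # Pick the first candidate whose filesystem has enough free space.
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        if ((target_space >= requested_size)); then
            export SPDK_TEST_STORAGE=$target_dir   # here: /home/vagrant/spdk_repo/spdk/test/nvmf/target
            break
        fi
    done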
00:07:41.755 07:13:43 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:41.755 07:13:43 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:41.755 07:13:43 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:41.755 07:13:43 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.755 07:13:43 -- common/autotest_common.sh@363 -- # mount=/home 00:07:41.755 07:13:43 -- common/autotest_common.sh@365 -- # target_space=13300465664 00:07:41.755 07:13:43 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:41.755 07:13:43 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:41.755 07:13:43 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:07:41.755 07:13:43 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:07:41.755 07:13:43 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:07:41.755 07:13:43 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.755 07:13:43 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.755 07:13:43 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.755 07:13:43 -- common/autotest_common.sh@380 -- # return 0 00:07:41.755 07:13:43 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:41.755 07:13:43 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:41.755 07:13:43 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:41.755 07:13:43 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:41.755 07:13:43 -- common/autotest_common.sh@1672 -- # true 00:07:41.755 07:13:43 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:41.755 07:13:43 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:41.755 07:13:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:41.755 07:13:43 -- common/autotest_common.sh@27 -- # exec 00:07:41.755 07:13:43 -- common/autotest_common.sh@29 -- # exec 00:07:41.755 07:13:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:41.755 07:13:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:41.755 07:13:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:41.755 07:13:43 -- common/autotest_common.sh@18 -- # set -x 00:07:41.755 07:13:43 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.755 07:13:43 -- nvmf/common.sh@7 -- # uname -s 00:07:41.755 07:13:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.755 07:13:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.755 07:13:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.755 07:13:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.755 07:13:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.755 07:13:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.755 07:13:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.755 07:13:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.755 07:13:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.755 07:13:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.755 07:13:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:41.755 07:13:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:07:41.755 07:13:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.755 07:13:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.755 07:13:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:41.755 07:13:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.755 07:13:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.755 07:13:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.755 07:13:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.755 07:13:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 07:13:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 07:13:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 07:13:43 -- paths/export.sh@5 -- # export PATH 00:07:41.755 07:13:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 07:13:43 -- nvmf/common.sh@46 -- # : 0 00:07:41.755 07:13:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:41.755 07:13:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:41.755 07:13:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:41.755 07:13:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.755 07:13:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.755 07:13:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:41.755 07:13:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:41.755 07:13:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:41.756 07:13:43 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:41.756 07:13:43 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:41.756 07:13:43 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:41.756 07:13:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:41.756 07:13:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.756 07:13:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:41.756 07:13:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:41.756 07:13:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:41.756 07:13:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.756 07:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.756 07:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.756 07:13:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:41.756 07:13:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:41.756 07:13:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:41.756 07:13:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:41.756 07:13:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:41.756 07:13:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:41.756 07:13:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.756 07:13:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.756 07:13:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:41.756 07:13:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:41.756 07:13:43 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:41.756 07:13:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:41.756 07:13:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:41.756 07:13:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.756 07:13:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:41.756 07:13:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:41.756 07:13:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:41.756 07:13:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:41.756 07:13:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:41.756 07:13:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:41.756 Cannot find device "nvmf_tgt_br" 00:07:41.756 07:13:43 -- nvmf/common.sh@154 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:41.756 Cannot find device "nvmf_tgt_br2" 00:07:41.756 07:13:43 -- nvmf/common.sh@155 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:41.756 07:13:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:41.756 Cannot find device "nvmf_tgt_br" 00:07:41.756 07:13:43 -- nvmf/common.sh@157 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:41.756 Cannot find device "nvmf_tgt_br2" 00:07:41.756 07:13:43 -- nvmf/common.sh@158 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:41.756 07:13:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:41.756 07:13:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:41.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:41.756 07:13:43 -- nvmf/common.sh@161 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:41.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:41.756 07:13:43 -- nvmf/common.sh@162 -- # true 00:07:41.756 07:13:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:41.756 07:13:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:41.756 07:13:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.015 07:13:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.015 07:13:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.015 07:13:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.015 07:13:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.015 07:13:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:42.015 07:13:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:42.015 07:13:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:42.015 07:13:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:42.015 07:13:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:42.015 07:13:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:42.015 07:13:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.015 07:13:43 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.015 07:13:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.015 07:13:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:42.015 07:13:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:42.015 07:13:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.015 07:13:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.015 07:13:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.015 07:13:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.015 07:13:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.015 07:13:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:42.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:07:42.015 00:07:42.015 --- 10.0.0.2 ping statistics --- 00:07:42.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.015 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:42.015 07:13:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:42.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:42.015 00:07:42.015 --- 10.0.0.3 ping statistics --- 00:07:42.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.015 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:42.015 07:13:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:42.015 00:07:42.015 --- 10.0.0.1 ping statistics --- 00:07:42.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.015 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:42.015 07:13:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.015 07:13:43 -- nvmf/common.sh@421 -- # return 0 00:07:42.015 07:13:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:42.015 07:13:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.015 07:13:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:42.015 07:13:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:42.015 07:13:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.015 07:13:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:42.015 07:13:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:42.015 07:13:43 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:42.015 07:13:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:42.015 07:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.015 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 ************************************ 00:07:42.015 START TEST nvmf_filesystem_no_in_capsule 00:07:42.015 ************************************ 00:07:42.015 07:13:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:42.015 07:13:43 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:42.015 07:13:43 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.015 07:13:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:42.015 07:13:43 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:07:42.015 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 07:13:43 -- nvmf/common.sh@469 -- # nvmfpid=72391 00:07:42.015 07:13:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.015 07:13:43 -- nvmf/common.sh@470 -- # waitforlisten 72391 00:07:42.015 07:13:43 -- common/autotest_common.sh@819 -- # '[' -z 72391 ']' 00:07:42.015 07:13:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.015 07:13:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.015 07:13:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.015 07:13:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.015 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 [2024-11-04 07:13:43.852272] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:42.015 [2024-11-04 07:13:43.852369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.274 [2024-11-04 07:13:43.992570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.274 [2024-11-04 07:13:44.051843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.274 [2024-11-04 07:13:44.051988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.274 [2024-11-04 07:13:44.052000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.274 [2024-11-04 07:13:44.052008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
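Once the target is running inside the nvmf_tgt_ns_spdk namespace, the rpc_cmd calls traced below configure it, and the host then connects over the veth/bridge path set up earlier (initiator 10.0.0.1, target 10.0.0.2). Condensed, and assuming rpc_cmd is a thin wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock socket shown in the trace, the sequence is roughly:

    # Assumption: rpc_cmd in the trace resolves to rpc.py at the default RPC socket.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    # Target side (flags copied from the trace): TCP transport with 8192-byte IO units and
    # 0-byte in-capsule data -- this is the no_in_capsule variant of the filesystem test.
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB RAM-backed bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect with nvme-cli, after which the namespace appears as /dev/nvme0n1
    # and is partitioned and formatted (ext4, btrfs, xfs) by the filesystem tests that follow.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a \
        --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a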
00:07:42.274 [2024-11-04 07:13:44.052143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.274 [2024-11-04 07:13:44.052305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.274 [2024-11-04 07:13:44.052763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.274 [2024-11-04 07:13:44.052851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.210 07:13:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.210 07:13:44 -- common/autotest_common.sh@852 -- # return 0 00:07:43.210 07:13:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:43.210 07:13:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:43.210 07:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 07:13:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.210 07:13:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.210 07:13:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:43.210 07:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 [2024-11-04 07:13:44.825291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.210 07:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.210 07:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 Malloc1 00:07:43.210 07:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:44 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.210 07:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 07:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.210 07:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 07:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.210 07:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 [2024-11-04 07:13:45.015135] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.210 07:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.210 07:13:45 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:43.210 07:13:45 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:43.210 07:13:45 -- common/autotest_common.sh@1359 -- # local bs 00:07:43.210 07:13:45 -- common/autotest_common.sh@1360 -- # local nb 00:07:43.210 07:13:45 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.210 07:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.210 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 
07:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.210 07:13:45 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:43.210 { 00:07:43.210 "aliases": [ 00:07:43.210 "cca71527-f35e-4252-b309-4d4a156a8c53" 00:07:43.210 ], 00:07:43.210 "assigned_rate_limits": { 00:07:43.210 "r_mbytes_per_sec": 0, 00:07:43.210 "rw_ios_per_sec": 0, 00:07:43.210 "rw_mbytes_per_sec": 0, 00:07:43.210 "w_mbytes_per_sec": 0 00:07:43.210 }, 00:07:43.210 "block_size": 512, 00:07:43.210 "claim_type": "exclusive_write", 00:07:43.210 "claimed": true, 00:07:43.210 "driver_specific": {}, 00:07:43.210 "memory_domains": [ 00:07:43.210 { 00:07:43.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.210 "dma_device_type": 2 00:07:43.210 } 00:07:43.210 ], 00:07:43.210 "name": "Malloc1", 00:07:43.210 "num_blocks": 1048576, 00:07:43.211 "product_name": "Malloc disk", 00:07:43.211 "supported_io_types": { 00:07:43.211 "abort": true, 00:07:43.211 "compare": false, 00:07:43.211 "compare_and_write": false, 00:07:43.211 "flush": true, 00:07:43.211 "nvme_admin": false, 00:07:43.211 "nvme_io": false, 00:07:43.211 "read": true, 00:07:43.211 "reset": true, 00:07:43.211 "unmap": true, 00:07:43.211 "write": true, 00:07:43.211 "write_zeroes": true 00:07:43.211 }, 00:07:43.211 "uuid": "cca71527-f35e-4252-b309-4d4a156a8c53", 00:07:43.211 "zoned": false 00:07:43.211 } 00:07:43.211 ]' 00:07:43.211 07:13:45 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:43.468 07:13:45 -- common/autotest_common.sh@1362 -- # bs=512 00:07:43.468 07:13:45 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:43.468 07:13:45 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:43.468 07:13:45 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:43.468 07:13:45 -- common/autotest_common.sh@1367 -- # echo 512 00:07:43.468 07:13:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.468 07:13:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.727 07:13:45 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.727 07:13:45 -- common/autotest_common.sh@1177 -- # local i=0 00:07:43.727 07:13:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.727 07:13:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:43.727 07:13:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:45.630 07:13:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:45.630 07:13:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:45.630 07:13:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.630 07:13:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:45.630 07:13:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.630 07:13:47 -- common/autotest_common.sh@1187 -- # return 0 00:07:45.630 07:13:47 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:45.630 07:13:47 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:45.630 07:13:47 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:45.630 07:13:47 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:45.630 07:13:47 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:45.630 07:13:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:45.630 07:13:47 -- 
setup/common.sh@80 -- # echo 536870912 00:07:45.630 07:13:47 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:45.630 07:13:47 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:45.630 07:13:47 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:45.630 07:13:47 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:45.630 07:13:47 -- target/filesystem.sh@69 -- # partprobe 00:07:45.889 07:13:47 -- target/filesystem.sh@70 -- # sleep 1 00:07:46.824 07:13:48 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:46.824 07:13:48 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:46.824 07:13:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:46.824 07:13:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.824 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:07:46.824 ************************************ 00:07:46.824 START TEST filesystem_ext4 00:07:46.824 ************************************ 00:07:46.824 07:13:48 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:46.824 07:13:48 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:46.824 07:13:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.824 07:13:48 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:46.824 07:13:48 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:46.824 07:13:48 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:46.824 07:13:48 -- common/autotest_common.sh@904 -- # local i=0 00:07:46.824 07:13:48 -- common/autotest_common.sh@905 -- # local force 00:07:46.824 07:13:48 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:46.824 07:13:48 -- common/autotest_common.sh@908 -- # force=-F 00:07:46.824 07:13:48 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:46.824 mke2fs 1.47.0 (5-Feb-2023) 00:07:47.083 Discarding device blocks: 0/522240 done 00:07:47.083 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:47.083 Filesystem UUID: 165d943b-0a3b-4425-b1c5-85586fc76a08 00:07:47.083 Superblock backups stored on blocks: 00:07:47.083 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.083 00:07:47.083 Allocating group tables: 0/64 done 00:07:47.083 Writing inode tables: 0/64 done 00:07:47.083 Creating journal (8192 blocks): done 00:07:47.083 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.083 00:07:47.083 07:13:48 -- common/autotest_common.sh@921 -- # return 0 00:07:47.083 07:13:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.351 07:13:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.351 07:13:54 -- target/filesystem.sh@25 -- # sync 00:07:52.610 07:13:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.610 07:13:54 -- target/filesystem.sh@27 -- # sync 00:07:52.610 07:13:54 -- target/filesystem.sh@29 -- # i=0 00:07:52.610 07:13:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.610 07:13:54 -- target/filesystem.sh@37 -- # kill -0 72391 00:07:52.610 07:13:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.610 07:13:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.610 07:13:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.610 07:13:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.610 ************************************ 00:07:52.610 END TEST filesystem_ext4 00:07:52.610 
************************************ 00:07:52.610 00:07:52.610 real 0m5.702s 00:07:52.610 user 0m0.026s 00:07:52.610 sys 0m0.068s 00:07:52.610 07:13:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.610 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:52.610 07:13:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:52.610 07:13:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:52.610 07:13:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.610 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:52.610 ************************************ 00:07:52.610 START TEST filesystem_btrfs 00:07:52.610 ************************************ 00:07:52.610 07:13:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:52.610 07:13:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:52.610 07:13:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.610 07:13:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:52.610 07:13:54 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:52.610 07:13:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:52.610 07:13:54 -- common/autotest_common.sh@904 -- # local i=0 00:07:52.610 07:13:54 -- common/autotest_common.sh@905 -- # local force 00:07:52.610 07:13:54 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:52.610 07:13:54 -- common/autotest_common.sh@910 -- # force=-f 00:07:52.610 07:13:54 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:52.869 btrfs-progs v6.8.1 00:07:52.869 See https://btrfs.readthedocs.io for more information. 00:07:52.869 00:07:52.869 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:52.869 NOTE: several default settings have changed in version 5.15, please make sure 00:07:52.869 this does not affect your deployments: 00:07:52.869 - DUP for metadata (-m dup) 00:07:52.869 - enabled no-holes (-O no-holes) 00:07:52.869 - enabled free-space-tree (-R free-space-tree) 00:07:52.869 00:07:52.869 Label: (null) 00:07:52.869 UUID: bdba21ee-4336-498c-85ff-973092fcd386 00:07:52.869 Node size: 16384 00:07:52.869 Sector size: 4096 (CPU page size: 4096) 00:07:52.869 Filesystem size: 510.00MiB 00:07:52.869 Block group profiles: 00:07:52.869 Data: single 8.00MiB 00:07:52.869 Metadata: DUP 32.00MiB 00:07:52.869 System: DUP 8.00MiB 00:07:52.869 SSD detected: yes 00:07:52.869 Zoned device: no 00:07:52.869 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:52.869 Checksum: crc32c 00:07:52.869 Number of devices: 1 00:07:52.869 Devices: 00:07:52.869 ID SIZE PATH 00:07:52.869 1 510.00MiB /dev/nvme0n1p1 00:07:52.869 00:07:52.869 07:13:54 -- common/autotest_common.sh@921 -- # return 0 00:07:52.869 07:13:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.869 07:13:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.869 07:13:54 -- target/filesystem.sh@25 -- # sync 00:07:52.869 07:13:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.869 07:13:54 -- target/filesystem.sh@27 -- # sync 00:07:52.869 07:13:54 -- target/filesystem.sh@29 -- # i=0 00:07:52.869 07:13:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.869 07:13:54 -- target/filesystem.sh@37 -- # kill -0 72391 00:07:52.869 07:13:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.869 07:13:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.869 07:13:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.869 07:13:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.869 ************************************ 00:07:52.869 END TEST filesystem_btrfs 00:07:52.869 ************************************ 00:07:52.869 00:07:52.869 real 0m0.240s 00:07:52.869 user 0m0.025s 00:07:52.869 sys 0m0.059s 00:07:52.869 07:13:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.869 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:52.869 07:13:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:52.869 07:13:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:52.869 07:13:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.869 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:52.869 ************************************ 00:07:52.869 START TEST filesystem_xfs 00:07:52.869 ************************************ 00:07:52.869 07:13:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:52.869 07:13:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:52.869 07:13:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.869 07:13:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:52.869 07:13:54 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:52.869 07:13:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:52.869 07:13:54 -- common/autotest_common.sh@904 -- # local i=0 00:07:52.869 07:13:54 -- common/autotest_common.sh@905 -- # local force 00:07:52.869 07:13:54 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:52.869 07:13:54 -- common/autotest_common.sh@910 -- # force=-f 00:07:52.869 07:13:54 -- common/autotest_common.sh@913 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:52.869 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:52.869 = sectsz=512 attr=2, projid32bit=1 00:07:52.869 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:52.869 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:52.869 data = bsize=4096 blocks=130560, imaxpct=25 00:07:52.869 = sunit=0 swidth=0 blks 00:07:52.869 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:52.869 log =internal log bsize=4096 blocks=16384, version=2 00:07:52.869 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:52.869 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:53.805 Discarding blocks...Done. 00:07:53.805 07:13:55 -- common/autotest_common.sh@921 -- # return 0 00:07:53.805 07:13:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.347 07:13:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.347 07:13:57 -- target/filesystem.sh@25 -- # sync 00:07:56.347 07:13:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.347 07:13:57 -- target/filesystem.sh@27 -- # sync 00:07:56.347 07:13:57 -- target/filesystem.sh@29 -- # i=0 00:07:56.347 07:13:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.347 07:13:57 -- target/filesystem.sh@37 -- # kill -0 72391 00:07:56.347 07:13:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.347 07:13:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.347 07:13:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.347 07:13:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.347 ************************************ 00:07:56.347 END TEST filesystem_xfs 00:07:56.347 ************************************ 00:07:56.347 00:07:56.347 real 0m3.115s 00:07:56.347 user 0m0.024s 00:07:56.347 sys 0m0.068s 00:07:56.347 07:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.347 07:13:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.347 07:13:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.347 07:13:57 -- target/filesystem.sh@93 -- # sync 00:07:56.347 07:13:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.348 07:13:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.348 07:13:57 -- common/autotest_common.sh@1198 -- # local i=0 00:07:56.348 07:13:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:56.348 07:13:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.348 07:13:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.348 07:13:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:56.348 07:13:57 -- common/autotest_common.sh@1210 -- # return 0 00:07:56.348 07:13:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.348 07:13:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.348 07:13:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.348 07:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.348 07:13:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.348 07:13:57 -- target/filesystem.sh@101 -- # killprocess 72391 00:07:56.348 07:13:57 -- common/autotest_common.sh@926 -- # '[' -z 72391 ']' 00:07:56.348 07:13:57 -- common/autotest_common.sh@930 -- # kill -0 72391 00:07:56.348 07:13:57 -- common/autotest_common.sh@931 -- # uname 00:07:56.348 07:13:57 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:56.348 07:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72391 00:07:56.348 killing process with pid 72391 00:07:56.348 07:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:56.348 07:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:56.348 07:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72391' 00:07:56.348 07:13:57 -- common/autotest_common.sh@945 -- # kill 72391 00:07:56.348 07:13:57 -- common/autotest_common.sh@950 -- # wait 72391 00:07:56.634 07:13:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.634 00:07:56.634 real 0m14.499s 00:07:56.634 user 0m56.085s 00:07:56.634 sys 0m1.600s 00:07:56.634 07:13:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.634 07:13:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.634 ************************************ 00:07:56.634 END TEST nvmf_filesystem_no_in_capsule 00:07:56.634 ************************************ 00:07:56.634 07:13:58 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:56.634 07:13:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:56.634 07:13:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.634 07:13:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.634 ************************************ 00:07:56.634 START TEST nvmf_filesystem_in_capsule 00:07:56.634 ************************************ 00:07:56.634 07:13:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:56.634 07:13:58 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:56.634 07:13:58 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:56.634 07:13:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:56.634 07:13:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:56.634 07:13:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.634 07:13:58 -- nvmf/common.sh@469 -- # nvmfpid=72759 00:07:56.634 07:13:58 -- nvmf/common.sh@470 -- # waitforlisten 72759 00:07:56.634 07:13:58 -- common/autotest_common.sh@819 -- # '[' -z 72759 ']' 00:07:56.634 07:13:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.634 07:13:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.634 07:13:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.634 07:13:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.634 07:13:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.634 07:13:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.634 [2024-11-04 07:13:58.405946] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
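The target just launched for the in-capsule variant is configured over RPC in the records that follow; as a rough standalone rendering of that sequence (an illustrative sketch only, assuming a running nvmf_tgt and SPDK's scripts/rpc.py, which is what the rpc_cmd helper seen in this log wraps):

    # transport with 4 KiB of in-capsule data (-c 4096) and 8192-byte IO units (-u)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB malloc bdev with 512-byte blocks to back the namespace
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # subsystem open to any host (-a), serial number the test later greps for
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect and the namespace appears as /dev/nvme0n1
    # (the test also passes --hostnqn/--hostid; nvme-cli defaults from /etc/nvme suffice for a sketch)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The filesystem tests then partition that device, build ext4/btrfs/xfs on it, and exercise mount, write, and unmount, exactly as logged below.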
00:07:56.634 [2024-11-04 07:13:58.406197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.892 [2024-11-04 07:13:58.547622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.893 [2024-11-04 07:13:58.612543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.893 [2024-11-04 07:13:58.612843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.893 [2024-11-04 07:13:58.612865] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.893 [2024-11-04 07:13:58.612905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.893 [2024-11-04 07:13:58.612972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.893 [2024-11-04 07:13:58.613074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.893 [2024-11-04 07:13:58.613554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.893 [2024-11-04 07:13:58.613610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.829 07:13:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.829 07:13:59 -- common/autotest_common.sh@852 -- # return 0 00:07:57.829 07:13:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:57.829 07:13:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:57.829 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.829 07:13:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.829 07:13:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:57.829 07:13:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:57.829 07:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.829 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.829 [2024-11-04 07:13:59.373601] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.829 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.829 07:13:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:57.829 07:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.829 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.829 Malloc1 00:07:57.829 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.829 07:13:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:57.829 07:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.829 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.829 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.829 07:13:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:57.829 07:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.830 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.830 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.830 07:13:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.830 07:13:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.830 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.830 [2024-11-04 07:13:59.555647] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.830 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.830 07:13:59 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:57.830 07:13:59 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:57.830 07:13:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:57.830 07:13:59 -- common/autotest_common.sh@1359 -- # local bs 00:07:57.830 07:13:59 -- common/autotest_common.sh@1360 -- # local nb 00:07:57.830 07:13:59 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:57.830 07:13:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.830 07:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.830 07:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.830 07:13:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:57.830 { 00:07:57.830 "aliases": [ 00:07:57.830 "9de3f642-da4b-420e-abfc-bfa3564fd6a4" 00:07:57.830 ], 00:07:57.830 "assigned_rate_limits": { 00:07:57.830 "r_mbytes_per_sec": 0, 00:07:57.830 "rw_ios_per_sec": 0, 00:07:57.830 "rw_mbytes_per_sec": 0, 00:07:57.830 "w_mbytes_per_sec": 0 00:07:57.830 }, 00:07:57.830 "block_size": 512, 00:07:57.830 "claim_type": "exclusive_write", 00:07:57.830 "claimed": true, 00:07:57.830 "driver_specific": {}, 00:07:57.830 "memory_domains": [ 00:07:57.830 { 00:07:57.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.830 "dma_device_type": 2 00:07:57.830 } 00:07:57.830 ], 00:07:57.830 "name": "Malloc1", 00:07:57.830 "num_blocks": 1048576, 00:07:57.830 "product_name": "Malloc disk", 00:07:57.830 "supported_io_types": { 00:07:57.830 "abort": true, 00:07:57.830 "compare": false, 00:07:57.830 "compare_and_write": false, 00:07:57.830 "flush": true, 00:07:57.830 "nvme_admin": false, 00:07:57.830 "nvme_io": false, 00:07:57.830 "read": true, 00:07:57.830 "reset": true, 00:07:57.830 "unmap": true, 00:07:57.830 "write": true, 00:07:57.830 "write_zeroes": true 00:07:57.830 }, 00:07:57.830 "uuid": "9de3f642-da4b-420e-abfc-bfa3564fd6a4", 00:07:57.830 "zoned": false 00:07:57.830 } 00:07:57.830 ]' 00:07:57.830 07:13:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:57.830 07:13:59 -- common/autotest_common.sh@1362 -- # bs=512 00:07:57.830 07:13:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:58.088 07:13:59 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:58.088 07:13:59 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:58.088 07:13:59 -- common/autotest_common.sh@1367 -- # echo 512 00:07:58.088 07:13:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:58.088 07:13:59 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.088 07:13:59 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.088 07:13:59 -- common/autotest_common.sh@1177 -- # local i=0 00:07:58.088 07:13:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.088 07:13:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:58.088 07:13:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:00.620 07:14:01 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:00.620 07:14:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:00.620 07:14:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.620 07:14:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:00.620 07:14:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.620 07:14:01 -- common/autotest_common.sh@1187 -- # return 0 00:08:00.620 07:14:01 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:00.620 07:14:01 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:00.620 07:14:01 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:00.620 07:14:01 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:00.620 07:14:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:00.620 07:14:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:00.620 07:14:01 -- setup/common.sh@80 -- # echo 536870912 00:08:00.620 07:14:01 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:00.621 07:14:01 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:00.621 07:14:01 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:00.621 07:14:01 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.621 07:14:01 -- target/filesystem.sh@69 -- # partprobe 00:08:00.621 07:14:02 -- target/filesystem.sh@70 -- # sleep 1 00:08:01.186 07:14:03 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:01.186 07:14:03 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:01.186 07:14:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:01.186 07:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.186 07:14:03 -- common/autotest_common.sh@10 -- # set +x 00:08:01.186 ************************************ 00:08:01.186 START TEST filesystem_in_capsule_ext4 00:08:01.186 ************************************ 00:08:01.186 07:14:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:01.186 07:14:03 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:01.186 07:14:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.186 07:14:03 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:01.186 07:14:03 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:01.186 07:14:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:01.186 07:14:03 -- common/autotest_common.sh@904 -- # local i=0 00:08:01.186 07:14:03 -- common/autotest_common.sh@905 -- # local force 00:08:01.186 07:14:03 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:01.186 07:14:03 -- common/autotest_common.sh@908 -- # force=-F 00:08:01.186 07:14:03 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:01.444 mke2fs 1.47.0 (5-Feb-2023) 00:08:01.444 Discarding device blocks: 0/522240 done 00:08:01.444 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:01.444 Filesystem UUID: d6121362-a3a0-419f-a36f-3b40c5343fd6 00:08:01.444 Superblock backups stored on blocks: 00:08:01.444 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:01.444 00:08:01.444 Allocating group tables: 0/64 done 00:08:01.444 Writing inode tables: 0/64 done 00:08:01.444 Creating journal (8192 blocks): done 00:08:01.444 Writing superblocks and filesystem accounting information: 0/64 done 00:08:01.444 00:08:01.444 07:14:03 
-- common/autotest_common.sh@921 -- # return 0 00:08:01.444 07:14:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.006 07:14:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.006 07:14:08 -- target/filesystem.sh@25 -- # sync 00:08:08.006 07:14:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.006 07:14:08 -- target/filesystem.sh@27 -- # sync 00:08:08.006 07:14:08 -- target/filesystem.sh@29 -- # i=0 00:08:08.006 07:14:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.006 07:14:08 -- target/filesystem.sh@37 -- # kill -0 72759 00:08:08.006 07:14:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.006 07:14:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.006 07:14:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.006 07:14:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.006 ************************************ 00:08:08.006 END TEST filesystem_in_capsule_ext4 00:08:08.006 ************************************ 00:08:08.006 00:08:08.006 real 0m5.745s 00:08:08.006 user 0m0.030s 00:08:08.006 sys 0m0.064s 00:08:08.006 07:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.006 07:14:08 -- common/autotest_common.sh@10 -- # set +x 00:08:08.006 07:14:08 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.006 07:14:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:08.006 07:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.006 07:14:08 -- common/autotest_common.sh@10 -- # set +x 00:08:08.006 ************************************ 00:08:08.006 START TEST filesystem_in_capsule_btrfs 00:08:08.006 ************************************ 00:08:08.006 07:14:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.006 07:14:08 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.006 07:14:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.006 07:14:08 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.006 07:14:08 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:08.006 07:14:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:08.006 07:14:08 -- common/autotest_common.sh@904 -- # local i=0 00:08:08.006 07:14:08 -- common/autotest_common.sh@905 -- # local force 00:08:08.006 07:14:08 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:08.006 07:14:08 -- common/autotest_common.sh@910 -- # force=-f 00:08:08.006 07:14:08 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.006 btrfs-progs v6.8.1 00:08:08.006 See https://btrfs.readthedocs.io for more information. 00:08:08.006 00:08:08.006 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.006 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.006 this does not affect your deployments: 00:08:08.006 - DUP for metadata (-m dup) 00:08:08.006 - enabled no-holes (-O no-holes) 00:08:08.006 - enabled free-space-tree (-R free-space-tree) 00:08:08.006 00:08:08.006 Label: (null) 00:08:08.006 UUID: 9a70f8c0-dcf9-431c-a759-44fb62b15da1 00:08:08.006 Node size: 16384 00:08:08.006 Sector size: 4096 (CPU page size: 4096) 00:08:08.006 Filesystem size: 510.00MiB 00:08:08.006 Block group profiles: 00:08:08.006 Data: single 8.00MiB 00:08:08.006 Metadata: DUP 32.00MiB 00:08:08.006 System: DUP 8.00MiB 00:08:08.006 SSD detected: yes 00:08:08.006 Zoned device: no 00:08:08.006 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.006 Checksum: crc32c 00:08:08.006 Number of devices: 1 00:08:08.006 Devices: 00:08:08.006 ID SIZE PATH 00:08:08.006 1 510.00MiB /dev/nvme0n1p1 00:08:08.006 00:08:08.006 07:14:08 -- common/autotest_common.sh@921 -- # return 0 00:08:08.006 07:14:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.006 07:14:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.006 07:14:09 -- target/filesystem.sh@25 -- # sync 00:08:08.006 07:14:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.006 07:14:09 -- target/filesystem.sh@27 -- # sync 00:08:08.006 07:14:09 -- target/filesystem.sh@29 -- # i=0 00:08:08.006 07:14:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.006 07:14:09 -- target/filesystem.sh@37 -- # kill -0 72759 00:08:08.006 07:14:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.006 07:14:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.006 07:14:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.006 07:14:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.006 00:08:08.006 real 0m0.359s 00:08:08.006 user 0m0.023s 00:08:08.006 sys 0m0.067s 00:08:08.006 07:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.006 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:08:08.006 ************************************ 00:08:08.006 END TEST filesystem_in_capsule_btrfs 00:08:08.006 ************************************ 00:08:08.007 07:14:09 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:08.007 07:14:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:08.007 07:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.007 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:08:08.007 ************************************ 00:08:08.007 START TEST filesystem_in_capsule_xfs 00:08:08.007 ************************************ 00:08:08.007 07:14:09 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:08.007 07:14:09 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:08.007 07:14:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.007 07:14:09 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:08.007 07:14:09 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:08.007 07:14:09 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:08.007 07:14:09 -- common/autotest_common.sh@904 -- # local i=0 00:08:08.007 07:14:09 -- common/autotest_common.sh@905 -- # local force 00:08:08.007 07:14:09 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:08.007 07:14:09 -- common/autotest_common.sh@910 -- # force=-f 00:08:08.007 07:14:09 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:08.007 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:08.007 = sectsz=512 attr=2, projid32bit=1 00:08:08.007 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:08.007 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:08.007 data = bsize=4096 blocks=130560, imaxpct=25 00:08:08.007 = sunit=0 swidth=0 blks 00:08:08.007 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:08.007 log =internal log bsize=4096 blocks=16384, version=2 00:08:08.007 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:08.007 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:08.265 Discarding blocks...Done. 00:08:08.265 07:14:10 -- common/autotest_common.sh@921 -- # return 0 00:08:08.265 07:14:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.169 07:14:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.169 07:14:11 -- target/filesystem.sh@25 -- # sync 00:08:10.169 07:14:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.169 07:14:11 -- target/filesystem.sh@27 -- # sync 00:08:10.169 07:14:11 -- target/filesystem.sh@29 -- # i=0 00:08:10.169 07:14:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.169 07:14:11 -- target/filesystem.sh@37 -- # kill -0 72759 00:08:10.169 07:14:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.169 07:14:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.169 07:14:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.169 07:14:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.169 ************************************ 00:08:10.169 END TEST filesystem_in_capsule_xfs 00:08:10.169 ************************************ 00:08:10.169 00:08:10.169 real 0m2.676s 00:08:10.169 user 0m0.021s 00:08:10.169 sys 0m0.061s 00:08:10.169 07:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.169 07:14:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 07:14:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:10.169 07:14:11 -- target/filesystem.sh@93 -- # sync 00:08:10.169 07:14:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.428 07:14:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.428 07:14:12 -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.428 07:14:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:10.428 07:14:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.428 07:14:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:10.428 07:14:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.428 07:14:12 -- common/autotest_common.sh@1210 -- # return 0 00:08:10.428 07:14:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.428 07:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:10.428 07:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.428 07:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:10.428 07:14:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:10.428 07:14:12 -- target/filesystem.sh@101 -- # killprocess 72759 00:08:10.428 07:14:12 -- common/autotest_common.sh@926 -- # '[' -z 72759 ']' 00:08:10.428 07:14:12 -- common/autotest_common.sh@930 -- # kill -0 72759 00:08:10.428 07:14:12 -- 
common/autotest_common.sh@931 -- # uname 00:08:10.428 07:14:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:10.428 07:14:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72759 00:08:10.428 killing process with pid 72759 00:08:10.428 07:14:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:10.428 07:14:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:10.428 07:14:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72759' 00:08:10.428 07:14:12 -- common/autotest_common.sh@945 -- # kill 72759 00:08:10.428 07:14:12 -- common/autotest_common.sh@950 -- # wait 72759 00:08:10.687 07:14:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.687 00:08:10.687 real 0m14.165s 00:08:10.687 user 0m54.821s 00:08:10.687 sys 0m1.530s 00:08:10.687 07:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.687 ************************************ 00:08:10.687 END TEST nvmf_filesystem_in_capsule 00:08:10.687 07:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.687 ************************************ 00:08:10.945 07:14:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:10.945 07:14:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:10.945 07:14:12 -- nvmf/common.sh@116 -- # sync 00:08:10.945 07:14:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:10.945 07:14:12 -- nvmf/common.sh@119 -- # set +e 00:08:10.945 07:14:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:10.945 07:14:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:10.945 rmmod nvme_tcp 00:08:10.945 rmmod nvme_fabrics 00:08:10.945 rmmod nvme_keyring 00:08:10.945 07:14:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.945 07:14:12 -- nvmf/common.sh@123 -- # set -e 00:08:10.945 07:14:12 -- nvmf/common.sh@124 -- # return 0 00:08:10.945 07:14:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:10.945 07:14:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.945 07:14:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:10.945 07:14:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:10.945 07:14:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.945 07:14:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:10.945 07:14:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.945 07:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.945 07:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.945 07:14:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:10.945 00:08:10.945 real 0m29.472s 00:08:10.946 user 1m51.126s 00:08:10.946 sys 0m3.526s 00:08:10.946 07:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.946 07:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.946 ************************************ 00:08:10.946 END TEST nvmf_filesystem 00:08:10.946 ************************************ 00:08:10.946 07:14:12 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.946 07:14:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:10.946 07:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.946 07:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:10.946 ************************************ 00:08:10.946 START TEST nvmf_discovery 00:08:10.946 ************************************ 00:08:10.946 07:14:12 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:11.205 * Looking for test storage... 00:08:11.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.205 07:14:12 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.205 07:14:12 -- nvmf/common.sh@7 -- # uname -s 00:08:11.205 07:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.205 07:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.205 07:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.205 07:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.205 07:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.205 07:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.205 07:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.205 07:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.205 07:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.205 07:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:11.205 07:14:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:11.205 07:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.205 07:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.205 07:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.205 07:14:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.205 07:14:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.205 07:14:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.205 07:14:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.205 07:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.205 07:14:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.205 07:14:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.205 07:14:12 -- paths/export.sh@5 -- # export PATH 00:08:11.205 07:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.205 07:14:12 -- nvmf/common.sh@46 -- # : 0 00:08:11.205 07:14:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.205 07:14:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.205 07:14:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.205 07:14:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.205 07:14:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.205 07:14:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.205 07:14:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.205 07:14:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.205 07:14:12 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:11.205 07:14:12 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:11.205 07:14:12 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:11.205 07:14:12 -- target/discovery.sh@15 -- # hash nvme 00:08:11.205 07:14:12 -- target/discovery.sh@20 -- # nvmftestinit 00:08:11.205 07:14:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:11.205 07:14:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.205 07:14:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.205 07:14:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.205 07:14:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.205 07:14:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.205 07:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.205 07:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.205 07:14:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:11.205 07:14:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:11.205 07:14:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.205 07:14:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.205 07:14:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.205 07:14:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:11.205 07:14:12 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.205 07:14:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.205 07:14:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.205 07:14:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.205 07:14:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.205 07:14:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.205 07:14:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.205 07:14:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.205 07:14:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:11.205 07:14:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:11.205 Cannot find device "nvmf_tgt_br" 00:08:11.205 07:14:12 -- nvmf/common.sh@154 -- # true 00:08:11.205 07:14:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.205 Cannot find device "nvmf_tgt_br2" 00:08:11.205 07:14:12 -- nvmf/common.sh@155 -- # true 00:08:11.205 07:14:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:11.205 07:14:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:11.205 Cannot find device "nvmf_tgt_br" 00:08:11.206 07:14:12 -- nvmf/common.sh@157 -- # true 00:08:11.206 07:14:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:11.206 Cannot find device "nvmf_tgt_br2" 00:08:11.206 07:14:12 -- nvmf/common.sh@158 -- # true 00:08:11.206 07:14:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.206 07:14:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.206 07:14:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.206 07:14:12 -- nvmf/common.sh@161 -- # true 00:08:11.206 07:14:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.206 07:14:12 -- nvmf/common.sh@162 -- # true 00:08:11.206 07:14:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.206 07:14:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.206 07:14:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.206 07:14:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.206 07:14:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.206 07:14:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.464 07:14:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.465 07:14:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.465 07:14:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.465 07:14:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.465 07:14:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.465 07:14:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:11.465 07:14:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.465 07:14:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.465 07:14:13 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.465 07:14:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.465 07:14:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.465 07:14:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.465 07:14:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.465 07:14:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.465 07:14:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.465 07:14:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.465 07:14:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.465 07:14:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:08:11.465 00:08:11.465 --- 10.0.0.2 ping statistics --- 00:08:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.465 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:11.465 07:14:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:11.465 00:08:11.465 --- 10.0.0.3 ping statistics --- 00:08:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.465 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:11.465 07:14:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:11.465 00:08:11.465 --- 10.0.0.1 ping statistics --- 00:08:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.465 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:11.465 07:14:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.465 07:14:13 -- nvmf/common.sh@421 -- # return 0 00:08:11.465 07:14:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.465 07:14:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.465 07:14:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.465 07:14:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.465 07:14:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.465 07:14:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.465 07:14:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.465 07:14:13 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:11.465 07:14:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.465 07:14:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:11.465 07:14:13 -- common/autotest_common.sh@10 -- # set +x 00:08:11.465 07:14:13 -- nvmf/common.sh@469 -- # nvmfpid=73299 00:08:11.465 07:14:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.465 07:14:13 -- nvmf/common.sh@470 -- # waitforlisten 73299 00:08:11.465 07:14:13 -- common/autotest_common.sh@819 -- # '[' -z 73299 ']' 00:08:11.465 07:14:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.465 07:14:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.465 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.465 07:14:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.465 07:14:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.465 07:14:13 -- common/autotest_common.sh@10 -- # set +x 00:08:11.465 [2024-11-04 07:14:13.259221] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:11.465 [2024-11-04 07:14:13.259303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.723 [2024-11-04 07:14:13.395203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.723 [2024-11-04 07:14:13.475303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.723 [2024-11-04 07:14:13.475446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.723 [2024-11-04 07:14:13.475461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.723 [2024-11-04 07:14:13.475470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.723 [2024-11-04 07:14:13.475628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.723 [2024-11-04 07:14:13.475758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.723 [2024-11-04 07:14:13.475905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.723 [2024-11-04 07:14:13.475923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.660 07:14:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.660 07:14:14 -- common/autotest_common.sh@852 -- # return 0 00:08:12.660 07:14:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.660 07:14:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.660 07:14:14 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 [2024-11-04 07:14:14.311980] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@26 -- # seq 1 4 00:08:12.660 07:14:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.660 07:14:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 Null1 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
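The RPCs that follow finish this pattern for Null1 and repeat it for Null2 through Null4, then expose the discovery service itself; condensed into a standalone sketch (same caveat as above: scripts/rpc.py standing in for the harness's rpc_cmd), the whole sequence is roughly:

    # one null bdev + subsystem + namespace + TCP listener per index
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    # expose the discovery subsystem and add a referral on port 4430
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # an initiator-side discovery should now report 6 records:
    # the discovery subsystem, cnode1-4, and the 4430 referral
    nvme discover -t tcp -a 10.0.0.2 -s 4420

That six-record discovery log and the matching nvmf_get_subsystems RPC dump are what the remainder of this test verifies before tearing everything back down.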
00:08:12.660 07:14:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 [2024-11-04 07:14:14.372115] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.660 07:14:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 Null2 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.660 07:14:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 Null3 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.660 07:14:14 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 Null4 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:12.660 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.660 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.660 07:14:14 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 4420 00:08:12.920 00:08:12.920 Discovery Log Number of Records 6, Generation counter 6 00:08:12.920 =====Discovery Log Entry 0====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: current discovery subsystem 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4420 00:08:12.920 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: explicit discovery connections, duplicate discovery information 00:08:12.920 sectype: none 00:08:12.920 =====Discovery Log Entry 1====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: nvme subsystem 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4420 00:08:12.920 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: none 00:08:12.920 sectype: none 00:08:12.920 =====Discovery Log Entry 2====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: nvme subsystem 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4420 
00:08:12.920 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: none 00:08:12.920 sectype: none 00:08:12.920 =====Discovery Log Entry 3====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: nvme subsystem 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4420 00:08:12.920 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: none 00:08:12.920 sectype: none 00:08:12.920 =====Discovery Log Entry 4====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: nvme subsystem 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4420 00:08:12.920 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: none 00:08:12.920 sectype: none 00:08:12.920 =====Discovery Log Entry 5====== 00:08:12.920 trtype: tcp 00:08:12.920 adrfam: ipv4 00:08:12.920 subtype: discovery subsystem referral 00:08:12.920 treq: not required 00:08:12.920 portid: 0 00:08:12.920 trsvcid: 4430 00:08:12.920 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:12.920 traddr: 10.0.0.2 00:08:12.920 eflags: none 00:08:12.920 sectype: none 00:08:12.920 Perform nvmf subsystem discovery via RPC 00:08:12.920 07:14:14 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:12.920 07:14:14 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:12.920 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.920 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 [2024-11-04 07:14:14.608295] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:12.920 [ 00:08:12.920 { 00:08:12.920 "allow_any_host": true, 00:08:12.920 "hosts": [], 00:08:12.920 "listen_addresses": [ 00:08:12.920 { 00:08:12.920 "adrfam": "IPv4", 00:08:12.920 "traddr": "10.0.0.2", 00:08:12.920 "transport": "TCP", 00:08:12.920 "trsvcid": "4420", 00:08:12.920 "trtype": "TCP" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:12.920 "subtype": "Discovery" 00:08:12.920 }, 00:08:12.920 { 00:08:12.920 "allow_any_host": true, 00:08:12.920 "hosts": [], 00:08:12.920 "listen_addresses": [ 00:08:12.920 { 00:08:12.920 "adrfam": "IPv4", 00:08:12.920 "traddr": "10.0.0.2", 00:08:12.920 "transport": "TCP", 00:08:12.920 "trsvcid": "4420", 00:08:12.920 "trtype": "TCP" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "max_cntlid": 65519, 00:08:12.920 "max_namespaces": 32, 00:08:12.920 "min_cntlid": 1, 00:08:12.920 "model_number": "SPDK bdev Controller", 00:08:12.920 "namespaces": [ 00:08:12.920 { 00:08:12.920 "bdev_name": "Null1", 00:08:12.920 "name": "Null1", 00:08:12.920 "nguid": "5453EEE55882459D88603A90B207E9B2", 00:08:12.920 "nsid": 1, 00:08:12.920 "uuid": "5453eee5-5882-459d-8860-3a90b207e9b2" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.920 "serial_number": "SPDK00000000000001", 00:08:12.920 "subtype": "NVMe" 00:08:12.920 }, 00:08:12.920 { 00:08:12.920 "allow_any_host": true, 00:08:12.920 "hosts": [], 00:08:12.920 "listen_addresses": [ 00:08:12.920 { 00:08:12.920 "adrfam": "IPv4", 00:08:12.920 "traddr": "10.0.0.2", 00:08:12.920 "transport": "TCP", 00:08:12.920 "trsvcid": "4420", 00:08:12.920 "trtype": "TCP" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "max_cntlid": 65519, 00:08:12.920 "max_namespaces": 32, 00:08:12.920 "min_cntlid": 1, 
00:08:12.920 "model_number": "SPDK bdev Controller", 00:08:12.920 "namespaces": [ 00:08:12.920 { 00:08:12.920 "bdev_name": "Null2", 00:08:12.920 "name": "Null2", 00:08:12.920 "nguid": "FE523A00D60A4D258B3F453A95BECC5D", 00:08:12.920 "nsid": 1, 00:08:12.920 "uuid": "fe523a00-d60a-4d25-8b3f-453a95becc5d" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:12.920 "serial_number": "SPDK00000000000002", 00:08:12.920 "subtype": "NVMe" 00:08:12.920 }, 00:08:12.920 { 00:08:12.920 "allow_any_host": true, 00:08:12.920 "hosts": [], 00:08:12.920 "listen_addresses": [ 00:08:12.920 { 00:08:12.920 "adrfam": "IPv4", 00:08:12.920 "traddr": "10.0.0.2", 00:08:12.920 "transport": "TCP", 00:08:12.920 "trsvcid": "4420", 00:08:12.920 "trtype": "TCP" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "max_cntlid": 65519, 00:08:12.920 "max_namespaces": 32, 00:08:12.920 "min_cntlid": 1, 00:08:12.920 "model_number": "SPDK bdev Controller", 00:08:12.920 "namespaces": [ 00:08:12.920 { 00:08:12.920 "bdev_name": "Null3", 00:08:12.920 "name": "Null3", 00:08:12.920 "nguid": "58DBAD94CD294D8299587418DB85533F", 00:08:12.920 "nsid": 1, 00:08:12.920 "uuid": "58dbad94-cd29-4d82-9958-7418db85533f" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:12.920 "serial_number": "SPDK00000000000003", 00:08:12.920 "subtype": "NVMe" 00:08:12.920 }, 00:08:12.920 { 00:08:12.920 "allow_any_host": true, 00:08:12.920 "hosts": [], 00:08:12.920 "listen_addresses": [ 00:08:12.920 { 00:08:12.920 "adrfam": "IPv4", 00:08:12.920 "traddr": "10.0.0.2", 00:08:12.920 "transport": "TCP", 00:08:12.920 "trsvcid": "4420", 00:08:12.920 "trtype": "TCP" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "max_cntlid": 65519, 00:08:12.920 "max_namespaces": 32, 00:08:12.920 "min_cntlid": 1, 00:08:12.920 "model_number": "SPDK bdev Controller", 00:08:12.920 "namespaces": [ 00:08:12.920 { 00:08:12.920 "bdev_name": "Null4", 00:08:12.920 "name": "Null4", 00:08:12.920 "nguid": "CA436BB0ED5B4F199EB02857482E5677", 00:08:12.920 "nsid": 1, 00:08:12.920 "uuid": "ca436bb0-ed5b-4f19-9eb0-2857482e5677" 00:08:12.920 } 00:08:12.920 ], 00:08:12.920 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:12.920 "serial_number": "SPDK00000000000004", 00:08:12.920 "subtype": "NVMe" 00:08:12.920 } 00:08:12.920 ] 00:08:12.920 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.920 07:14:14 -- target/discovery.sh@42 -- # seq 1 4 00:08:12.920 07:14:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.921 07:14:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.921 07:14:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.921 07:14:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.921 07:14:14 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.921 07:14:14 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:12.921 07:14:14 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:12.921 07:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.921 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:12.921 07:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.179 07:14:14 -- target/discovery.sh@49 -- # check_bdevs= 00:08:13.179 07:14:14 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:13.179 07:14:14 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:13.179 07:14:14 -- target/discovery.sh@57 -- # nvmftestfini 00:08:13.179 07:14:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:13.179 07:14:14 -- nvmf/common.sh@116 -- # sync 00:08:13.179 07:14:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:13.179 07:14:14 -- nvmf/common.sh@119 -- # set +e 00:08:13.179 07:14:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:13.179 07:14:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:13.179 rmmod nvme_tcp 00:08:13.179 rmmod nvme_fabrics 00:08:13.179 rmmod nvme_keyring 00:08:13.179 07:14:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:13.180 07:14:14 -- nvmf/common.sh@123 -- # set -e 00:08:13.180 07:14:14 -- nvmf/common.sh@124 -- # return 0 00:08:13.180 07:14:14 -- nvmf/common.sh@477 -- # '[' -n 73299 ']' 00:08:13.180 07:14:14 -- nvmf/common.sh@478 -- # killprocess 73299 00:08:13.180 07:14:14 -- common/autotest_common.sh@926 -- # '[' -z 73299 ']' 00:08:13.180 07:14:14 -- 
common/autotest_common.sh@930 -- # kill -0 73299 00:08:13.180 07:14:14 -- common/autotest_common.sh@931 -- # uname 00:08:13.180 07:14:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:13.180 07:14:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73299 00:08:13.180 07:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:13.180 07:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:13.180 killing process with pid 73299 00:08:13.180 07:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73299' 00:08:13.180 07:14:14 -- common/autotest_common.sh@945 -- # kill 73299 00:08:13.180 [2024-11-04 07:14:14.884646] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:13.180 07:14:14 -- common/autotest_common.sh@950 -- # wait 73299 00:08:13.438 07:14:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:13.438 07:14:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:13.438 07:14:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:13.438 07:14:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.438 07:14:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:13.438 07:14:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.438 07:14:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.438 07:14:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.438 07:14:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:13.438 00:08:13.438 real 0m2.417s 00:08:13.438 user 0m6.869s 00:08:13.438 sys 0m0.661s 00:08:13.438 07:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.438 ************************************ 00:08:13.438 END TEST nvmf_discovery 00:08:13.438 ************************************ 00:08:13.438 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.438 07:14:15 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.438 07:14:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:13.438 07:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.438 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.438 ************************************ 00:08:13.438 START TEST nvmf_referrals 00:08:13.438 ************************************ 00:08:13.438 07:14:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.697 * Looking for test storage... 
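The nvmf_discovery test above drives the target entirely through SPDK's JSON-RPC interface: for each null bdev it creates a subsystem, attaches the bdev as a namespace, adds a TCP listener, then registers a referral and reads the discovery log back from the initiator side (6 records: the current discovery subsystem, four NVM subsystems, one referral). A condensed sketch of that flow, assuming SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket — the test itself wraps the same method names in its rpc_cmd helper:

    # condensed from test/nvmf/target/discovery.sh; assumes the TCP transport was already created
    rpc=scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512                            # 100 GiB null bdev, 512 B blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a \
             -s "SPDK0000000000000$i"                                        # allow any host, fixed serial
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"    # expose the bdev as a namespace
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420    # discovery service listener
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430              # sixth discovery log entry
    nvme discover -t tcp -a 10.0.0.2 -s 4420                                 # test also passes --hostnqn/--hostid

Teardown is the mirror image seen in the log: nvmf_delete_subsystem and bdev_null_delete per cnode, nvmf_discovery_remove_referral, then bdev_get_bdevs to confirm nothing is left behind.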
00:08:13.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.697 07:14:15 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.697 07:14:15 -- nvmf/common.sh@7 -- # uname -s 00:08:13.697 07:14:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.697 07:14:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.697 07:14:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.697 07:14:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.697 07:14:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.697 07:14:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.697 07:14:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.697 07:14:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.697 07:14:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.697 07:14:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.697 07:14:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:13.697 07:14:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:13.697 07:14:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.697 07:14:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.697 07:14:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.697 07:14:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.697 07:14:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.697 07:14:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.697 07:14:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.697 07:14:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.698 07:14:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.698 07:14:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.698 07:14:15 -- 
paths/export.sh@5 -- # export PATH 00:08:13.698 07:14:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.698 07:14:15 -- nvmf/common.sh@46 -- # : 0 00:08:13.698 07:14:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:13.698 07:14:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:13.698 07:14:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:13.698 07:14:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.698 07:14:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.698 07:14:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:13.698 07:14:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:13.698 07:14:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:13.698 07:14:15 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:13.698 07:14:15 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:13.698 07:14:15 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:13.698 07:14:15 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:13.698 07:14:15 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:13.698 07:14:15 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:13.698 07:14:15 -- target/referrals.sh@37 -- # nvmftestinit 00:08:13.698 07:14:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:13.698 07:14:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.698 07:14:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:13.698 07:14:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:13.698 07:14:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:13.698 07:14:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.698 07:14:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.698 07:14:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.698 07:14:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:13.698 07:14:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:13.698 07:14:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:13.698 07:14:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:13.698 07:14:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:13.698 07:14:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:13.698 07:14:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.698 07:14:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.698 07:14:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:13.698 07:14:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:13.698 07:14:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.698 07:14:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.698 07:14:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.698 07:14:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.698 07:14:15 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.698 07:14:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.698 07:14:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:13.698 07:14:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.698 07:14:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:13.698 07:14:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:13.698 Cannot find device "nvmf_tgt_br" 00:08:13.698 07:14:15 -- nvmf/common.sh@154 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.698 Cannot find device "nvmf_tgt_br2" 00:08:13.698 07:14:15 -- nvmf/common.sh@155 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:13.698 07:14:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:13.698 Cannot find device "nvmf_tgt_br" 00:08:13.698 07:14:15 -- nvmf/common.sh@157 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:13.698 Cannot find device "nvmf_tgt_br2" 00:08:13.698 07:14:15 -- nvmf/common.sh@158 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:13.698 07:14:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:13.698 07:14:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.698 07:14:15 -- nvmf/common.sh@161 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.698 07:14:15 -- nvmf/common.sh@162 -- # true 00:08:13.698 07:14:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.698 07:14:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.698 07:14:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.698 07:14:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.698 07:14:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.698 07:14:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.698 07:14:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.698 07:14:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.698 07:14:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.698 07:14:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:13.698 07:14:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:13.957 07:14:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:13.957 07:14:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:13.957 07:14:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.957 07:14:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.957 07:14:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.957 07:14:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:13.957 07:14:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:13.957 07:14:15 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.957 07:14:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.957 07:14:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.957 07:14:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.957 07:14:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.957 07:14:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:13.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:13.957 00:08:13.957 --- 10.0.0.2 ping statistics --- 00:08:13.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.957 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:13.957 07:14:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:13.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:08:13.957 00:08:13.957 --- 10.0.0.3 ping statistics --- 00:08:13.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.957 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:13.957 07:14:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:13.957 00:08:13.957 --- 10.0.0.1 ping statistics --- 00:08:13.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.957 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:13.957 07:14:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.957 07:14:15 -- nvmf/common.sh@421 -- # return 0 00:08:13.957 07:14:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:13.957 07:14:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.957 07:14:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:13.957 07:14:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:13.957 07:14:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.957 07:14:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:13.957 07:14:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:13.957 07:14:15 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:13.957 07:14:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:13.958 07:14:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:13.958 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.958 07:14:15 -- nvmf/common.sh@469 -- # nvmfpid=73526 00:08:13.958 07:14:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.958 07:14:15 -- nvmf/common.sh@470 -- # waitforlisten 73526 00:08:13.958 07:14:15 -- common/autotest_common.sh@819 -- # '[' -z 73526 ']' 00:08:13.958 07:14:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.958 07:14:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.958 07:14:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
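Before the referral checks run, nvmf_veth_init (sourced from test/nvmf/common.sh) builds the virtual topology visible in the ip/iptables commands above: a network namespace for the target, veth pairs bridged back to the initiator side, and addresses on 10.0.0.0/24. A stripped-down sketch of that setup using the same interface names; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted here for brevity:

    # condensed from the nvmf_veth_init output above; run as root
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                          # enslave both halves to the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check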
00:08:13.958 07:14:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.958 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:13.958 [2024-11-04 07:14:15.713126] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:13.958 [2024-11-04 07:14:15.713204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.216 [2024-11-04 07:14:15.852718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.216 [2024-11-04 07:14:15.930322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.216 [2024-11-04 07:14:15.930479] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.216 [2024-11-04 07:14:15.930494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.216 [2024-11-04 07:14:15.930502] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.216 [2024-11-04 07:14:15.930668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.217 [2024-11-04 07:14:15.930819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.217 [2024-11-04 07:14:15.931401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.217 [2024-11-04 07:14:15.931452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.153 07:14:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:15.153 07:14:16 -- common/autotest_common.sh@852 -- # return 0 00:08:15.153 07:14:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:15.153 07:14:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.153 07:14:16 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 [2024-11-04 07:14:16.775162] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 [2024-11-04 07:14:16.803813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- target/referrals.sh@48 -- # jq length 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:15.153 07:14:16 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:15.153 07:14:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.153 07:14:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.153 07:14:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.153 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.153 07:14:16 -- target/referrals.sh@21 -- # sort 00:08:15.153 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.153 07:14:16 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.153 07:14:16 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:15.153 07:14:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.153 07:14:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.153 07:14:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.153 07:14:16 -- target/referrals.sh@26 -- # sort 00:08:15.153 07:14:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.412 07:14:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.412 07:14:17 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.412 07:14:17 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.412 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.412 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.412 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.412 07:14:17 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.412 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.412 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.412 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.412 07:14:17 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.412 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.412 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.412 07:14:17 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.412 07:14:17 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.412 07:14:17 -- target/referrals.sh@56 -- # jq length 00:08:15.412 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.412 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.412 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.412 07:14:17 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:15.412 07:14:17 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:15.412 07:14:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.412 07:14:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.412 07:14:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.412 07:14:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.412 07:14:17 -- target/referrals.sh@26 -- # sort 00:08:15.671 07:14:17 -- target/referrals.sh@26 -- # echo 00:08:15.671 07:14:17 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:15.671 07:14:17 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:15.671 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.671 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.671 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.671 07:14:17 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.671 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.671 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.671 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.671 07:14:17 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:15.671 07:14:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.671 07:14:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.671 07:14:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.671 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.671 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.671 07:14:17 -- target/referrals.sh@21 -- # sort 00:08:15.671 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.671 07:14:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:15.671 07:14:17 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.671 07:14:17 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:15.671 07:14:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.671 07:14:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.671 07:14:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.671 07:14:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.671 07:14:17 -- target/referrals.sh@26 -- # sort 00:08:15.671 07:14:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:15.671 07:14:17 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.671 07:14:17 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:15.671 07:14:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.671 07:14:17 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:15.671 07:14:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.671 07:14:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:15.930 07:14:17 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:15.930 07:14:17 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:15.930 07:14:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:15.930 07:14:17 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:15.930 07:14:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.930 07:14:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:15.930 07:14:17 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:15.930 07:14:17 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.930 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.930 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.930 07:14:17 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:15.930 07:14:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.930 07:14:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.930 07:14:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.930 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.930 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 07:14:17 -- target/referrals.sh@21 -- # sort 00:08:15.930 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.200 07:14:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:16.200 07:14:17 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.200 07:14:17 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:16.200 07:14:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.200 07:14:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.200 07:14:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.200 07:14:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.200 07:14:17 -- target/referrals.sh@26 -- # sort 00:08:16.200 07:14:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:16.200 07:14:17 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.200 07:14:17 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:08:16.200 07:14:17 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:16.200 07:14:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.200 07:14:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.200 07:14:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.200 07:14:18 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:16.200 07:14:18 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:16.200 07:14:18 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.200 07:14:18 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.200 07:14:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.200 07:14:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.474 07:14:18 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:16.474 07:14:18 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:16.474 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.474 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:08:16.474 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.474 07:14:18 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.474 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.474 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:08:16.474 07:14:18 -- target/referrals.sh@82 -- # jq length 00:08:16.474 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.474 07:14:18 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:16.474 07:14:18 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:16.474 07:14:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.474 07:14:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.474 07:14:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.474 07:14:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.474 07:14:18 -- target/referrals.sh@26 -- # sort 00:08:16.733 07:14:18 -- target/referrals.sh@26 -- # echo 00:08:16.733 07:14:18 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:16.733 07:14:18 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:16.733 07:14:18 -- target/referrals.sh@86 -- # nvmftestfini 00:08:16.733 07:14:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:16.733 07:14:18 -- nvmf/common.sh@116 -- # sync 00:08:16.733 07:14:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:16.733 07:14:18 -- nvmf/common.sh@119 -- # set +e 00:08:16.733 07:14:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:16.733 07:14:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:16.733 rmmod nvme_tcp 00:08:16.733 rmmod nvme_fabrics 00:08:16.733 rmmod nvme_keyring 
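The referral checks themselves reduce to a small RPC-plus-initiator loop: referrals are added and removed with nvmf_discovery_add_referral / nvmf_discovery_remove_referral, and the get_referral_ips helper compares what nvmf_discovery_get_referrals reports against what an nvme discover of the 8009 discovery listener actually returns. A minimal sketch of that comparison, assuming the same scripts/rpc.py setup as in the earlier sketch:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, 8 KiB in-capsule data
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009     # discovery service on the standard port
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # what the target thinks it advertises...
    rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
    # ...versus what an initiator actually sees in the discovery log
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs)
    [[ "$rpc_ips" == "$nvme_ips" ]] || echo "referral mismatch: '$rpc_ips' vs '$nvme_ips'"
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done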
00:08:16.733 07:14:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:16.733 07:14:18 -- nvmf/common.sh@123 -- # set -e 00:08:16.733 07:14:18 -- nvmf/common.sh@124 -- # return 0 00:08:16.733 07:14:18 -- nvmf/common.sh@477 -- # '[' -n 73526 ']' 00:08:16.733 07:14:18 -- nvmf/common.sh@478 -- # killprocess 73526 00:08:16.733 07:14:18 -- common/autotest_common.sh@926 -- # '[' -z 73526 ']' 00:08:16.733 07:14:18 -- common/autotest_common.sh@930 -- # kill -0 73526 00:08:16.733 07:14:18 -- common/autotest_common.sh@931 -- # uname 00:08:16.733 07:14:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:16.733 07:14:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73526 00:08:16.733 killing process with pid 73526 00:08:16.733 07:14:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:16.733 07:14:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:16.733 07:14:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73526' 00:08:16.733 07:14:18 -- common/autotest_common.sh@945 -- # kill 73526 00:08:16.733 07:14:18 -- common/autotest_common.sh@950 -- # wait 73526 00:08:16.992 07:14:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:16.992 07:14:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:16.992 07:14:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:16.992 07:14:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.992 07:14:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:16.992 07:14:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.992 07:14:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.992 07:14:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.992 07:14:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:16.992 00:08:16.992 real 0m3.570s 00:08:16.992 user 0m12.219s 00:08:16.992 sys 0m0.886s 00:08:16.992 ************************************ 00:08:16.992 END TEST nvmf_referrals 00:08:16.992 ************************************ 00:08:16.992 07:14:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.992 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:08:17.251 07:14:18 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.251 07:14:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.251 07:14:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.251 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:08:17.251 ************************************ 00:08:17.251 START TEST nvmf_connect_disconnect 00:08:17.251 ************************************ 00:08:17.251 07:14:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:17.251 * Looking for test storage... 
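For the per-entry assertions, the referrals test's get_discovery_entries helper splits the JSON discovery log by subtype rather than by address, so that a referral to another discovery service and a referral to a concrete NVM subsystem can be told apart. The same filtering can be reproduced directly with nvme-cli and jq, using the filters that appear in the log above:

    # referral entries pointing at a concrete NVM subsystem (the test expects nqn.2016-06.io.spdk:cnode1)
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
    # referral entries pointing at another discovery service (expects nqn.2014-08.org.nvmexpress.discovery)
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "discovery subsystem referral") | .subnqn'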
00:08:17.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.251 07:14:18 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.251 07:14:18 -- nvmf/common.sh@7 -- # uname -s 00:08:17.251 07:14:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.251 07:14:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.251 07:14:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.251 07:14:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.251 07:14:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.251 07:14:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.251 07:14:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.251 07:14:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.251 07:14:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.251 07:14:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.251 07:14:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:17.251 07:14:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:08:17.251 07:14:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.251 07:14:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.251 07:14:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.251 07:14:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.251 07:14:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.251 07:14:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.251 07:14:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.252 07:14:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.252 07:14:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.252 07:14:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.252 07:14:18 -- 
paths/export.sh@5 -- # export PATH 00:08:17.252 07:14:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.252 07:14:18 -- nvmf/common.sh@46 -- # : 0 00:08:17.252 07:14:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.252 07:14:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.252 07:14:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.252 07:14:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.252 07:14:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.252 07:14:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.252 07:14:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.252 07:14:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:17.252 07:14:18 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.252 07:14:18 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.252 07:14:18 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:17.252 07:14:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:17.252 07:14:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.252 07:14:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:17.252 07:14:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:17.252 07:14:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:17.252 07:14:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.252 07:14:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.252 07:14:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.252 07:14:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:17.252 07:14:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:17.252 07:14:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:17.252 07:14:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:17.252 07:14:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:17.252 07:14:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:17.252 07:14:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.252 07:14:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.252 07:14:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:17.252 07:14:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:17.252 07:14:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.252 07:14:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.252 07:14:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.252 07:14:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.252 07:14:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.252 07:14:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.252 07:14:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.252 07:14:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.252 07:14:18 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:17.252 07:14:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:17.252 Cannot find device "nvmf_tgt_br" 00:08:17.252 07:14:18 -- nvmf/common.sh@154 -- # true 00:08:17.252 07:14:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.252 Cannot find device "nvmf_tgt_br2" 00:08:17.252 07:14:19 -- nvmf/common.sh@155 -- # true 00:08:17.252 07:14:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:17.252 07:14:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:17.252 Cannot find device "nvmf_tgt_br" 00:08:17.252 07:14:19 -- nvmf/common.sh@157 -- # true 00:08:17.252 07:14:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:17.252 Cannot find device "nvmf_tgt_br2" 00:08:17.252 07:14:19 -- nvmf/common.sh@158 -- # true 00:08:17.252 07:14:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:17.252 07:14:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:17.252 07:14:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.252 07:14:19 -- nvmf/common.sh@161 -- # true 00:08:17.252 07:14:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.252 07:14:19 -- nvmf/common.sh@162 -- # true 00:08:17.252 07:14:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.510 07:14:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.511 07:14:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.511 07:14:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.511 07:14:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.511 07:14:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.511 07:14:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.511 07:14:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:17.511 07:14:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:17.511 07:14:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:17.511 07:14:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:17.511 07:14:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:17.511 07:14:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:17.511 07:14:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.511 07:14:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.511 07:14:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.511 07:14:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:17.511 07:14:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:17.511 07:14:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.511 07:14:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.511 07:14:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.511 07:14:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:08:17.511 07:14:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.511 07:14:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:17.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:08:17.511 00:08:17.511 --- 10.0.0.2 ping statistics --- 00:08:17.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.511 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:17.511 07:14:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:17.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:17.511 00:08:17.511 --- 10.0.0.3 ping statistics --- 00:08:17.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.511 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:17.511 07:14:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:17.511 00:08:17.511 --- 10.0.0.1 ping statistics --- 00:08:17.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.511 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:17.511 07:14:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.511 07:14:19 -- nvmf/common.sh@421 -- # return 0 00:08:17.511 07:14:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:17.511 07:14:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.511 07:14:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:17.511 07:14:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:17.511 07:14:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.511 07:14:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:17.511 07:14:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:17.511 07:14:19 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:17.511 07:14:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:17.511 07:14:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:17.511 07:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:17.511 07:14:19 -- nvmf/common.sh@469 -- # nvmfpid=73834 00:08:17.511 07:14:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.511 07:14:19 -- nvmf/common.sh@470 -- # waitforlisten 73834 00:08:17.511 07:14:19 -- common/autotest_common.sh@819 -- # '[' -z 73834 ']' 00:08:17.511 07:14:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.511 07:14:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.511 07:14:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.511 07:14:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.511 07:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:17.770 [2024-11-04 07:14:19.383609] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
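As in the referrals run, nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application's JSON-RPC socket is ready. A rough sketch of that launch-and-wait step; the socket-polling loop below is an assumption for illustration, while the real helper also enforces a retry limit:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all trace groups, cores 0-3
    nvmfpid=$!
    # wait until the target answers on the default RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done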
00:08:17.770 [2024-11-04 07:14:19.383697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.770 [2024-11-04 07:14:19.523772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.770 [2024-11-04 07:14:19.585700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.770 [2024-11-04 07:14:19.585852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.770 [2024-11-04 07:14:19.585865] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.770 [2024-11-04 07:14:19.585874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.770 [2024-11-04 07:14:19.586071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.770 [2024-11-04 07:14:19.586337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.770 [2024-11-04 07:14:19.587024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.770 [2024-11-04 07:14:19.587030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.706 07:14:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.706 07:14:20 -- common/autotest_common.sh@852 -- # return 0 00:08:18.706 07:14:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.706 07:14:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:18.706 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.706 07:14:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.706 07:14:20 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:18.706 07:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.706 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.706 [2024-11-04 07:14:20.439365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.706 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.706 07:14:20 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:18.706 07:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.706 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.706 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.706 07:14:20 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.707 07:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.707 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.707 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.707 07:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.707 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.707 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.707 07:14:20 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.707 07:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.707 [2024-11-04 07:14:20.506052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.707 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:18.707 07:14:20 -- target/connect_disconnect.sh@34 -- # set +x 00:08:21.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:51.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.941 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.685 07:18:05 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:04.685 07:18:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:04.685 07:18:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:04.685 07:18:05 -- nvmf/common.sh@116 -- # sync 00:12:04.685 07:18:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@119 -- # set +e 00:12:04.685 07:18:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:04.685 07:18:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:04.685 rmmod nvme_tcp 00:12:04.685 rmmod nvme_fabrics 00:12:04.685 rmmod nvme_keyring 00:12:04.685 07:18:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:04.685 07:18:06 -- nvmf/common.sh@123 -- # set -e 00:12:04.685 07:18:06 -- nvmf/common.sh@124 -- # return 0 00:12:04.685 07:18:06 -- nvmf/common.sh@477 -- # '[' -n 73834 ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@478 -- # killprocess 73834 00:12:04.685 07:18:06 -- common/autotest_common.sh@926 -- # '[' -z 73834 ']' 00:12:04.685 07:18:06 -- common/autotest_common.sh@930 -- # kill -0 73834 00:12:04.685 07:18:06 -- common/autotest_common.sh@931 -- # uname 00:12:04.685 07:18:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:04.685 07:18:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73834 00:12:04.685 killing process with pid 73834 00:12:04.685 07:18:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:04.685 07:18:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:04.685 07:18:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73834' 00:12:04.685 07:18:06 -- common/autotest_common.sh@945 -- # kill 73834 00:12:04.685 07:18:06 -- common/autotest_common.sh@950 -- # wait 73834 00:12:04.685 07:18:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:04.685 07:18:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:04.685 07:18:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.685 07:18:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:04.685 07:18:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.685 07:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.685 07:18:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.685 07:18:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:04.685 00:12:04.685 real 3m47.509s 00:12:04.685 user 14m51.019s 00:12:04.685 sys 0m18.134s 00:12:04.685 07:18:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.685 
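The long run of "disconnected 1 controller(s)" messages above is the body of the connect_disconnect test: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8' set earlier, each iteration attaches the initiator to the subsystem over TCP and immediately tears the association down. The loop body itself is not echoed here, but with the values used by this run it amounts to roughly the following sketch (readiness checks and the per-iteration serial wait are omitted):

  for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
  done

The elapsed-time column shows the 100 cycles taking a little under four minutes in total, consistent with the "real 3m47.509s" reported just above.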
07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:12:04.685 ************************************ 00:12:04.685 END TEST nvmf_connect_disconnect 00:12:04.685 ************************************ 00:12:04.685 07:18:06 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:04.685 07:18:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:04.685 07:18:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:04.685 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:12:04.685 ************************************ 00:12:04.685 START TEST nvmf_multitarget 00:12:04.685 ************************************ 00:12:04.685 07:18:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:04.685 * Looking for test storage... 00:12:04.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.685 07:18:06 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.685 07:18:06 -- nvmf/common.sh@7 -- # uname -s 00:12:04.685 07:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.685 07:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.685 07:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.685 07:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.685 07:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.685 07:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.685 07:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.685 07:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.685 07:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.685 07:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.685 07:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:04.685 07:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:04.685 07:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.685 07:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.685 07:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.685 07:18:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.685 07:18:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.685 07:18:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.685 07:18:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.685 07:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 07:18:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 07:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 07:18:06 -- paths/export.sh@5 -- # export PATH 00:12:04.685 07:18:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.685 07:18:06 -- nvmf/common.sh@46 -- # : 0 00:12:04.685 07:18:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:04.685 07:18:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:04.685 07:18:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.685 07:18:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.685 07:18:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:04.685 07:18:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:04.685 07:18:06 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.944 07:18:06 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:04.944 07:18:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:04.944 07:18:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.944 07:18:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:04.944 07:18:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:04.944 07:18:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:04.944 07:18:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.944 07:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.944 07:18:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.944 07:18:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:04.944 07:18:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:04.944 07:18:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:04.944 07:18:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:04.944 07:18:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:04.944 07:18:06 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:12:04.944 07:18:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.944 07:18:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.944 07:18:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:04.944 07:18:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:04.944 07:18:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.944 07:18:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.944 07:18:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.944 07:18:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.944 07:18:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.944 07:18:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.944 07:18:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.944 07:18:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.944 07:18:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:04.944 07:18:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:04.944 Cannot find device "nvmf_tgt_br" 00:12:04.944 07:18:06 -- nvmf/common.sh@154 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.944 Cannot find device "nvmf_tgt_br2" 00:12:04.944 07:18:06 -- nvmf/common.sh@155 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:04.944 07:18:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:04.944 Cannot find device "nvmf_tgt_br" 00:12:04.944 07:18:06 -- nvmf/common.sh@157 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:04.944 Cannot find device "nvmf_tgt_br2" 00:12:04.944 07:18:06 -- nvmf/common.sh@158 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:04.944 07:18:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:04.944 07:18:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.944 07:18:06 -- nvmf/common.sh@161 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.944 07:18:06 -- nvmf/common.sh@162 -- # true 00:12:04.944 07:18:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.944 07:18:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.944 07:18:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.944 07:18:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.944 07:18:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.944 07:18:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.944 07:18:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.945 07:18:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:04.945 07:18:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:04.945 07:18:06 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:12:04.945 07:18:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:04.945 07:18:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:04.945 07:18:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:04.945 07:18:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.945 07:18:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.945 07:18:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.945 07:18:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:04.945 07:18:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:04.945 07:18:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.203 07:18:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.203 07:18:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.203 07:18:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.203 07:18:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.203 07:18:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:05.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:05.203 00:12:05.203 --- 10.0.0.2 ping statistics --- 00:12:05.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.203 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:05.203 07:18:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:05.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:05.203 00:12:05.203 --- 10.0.0.3 ping statistics --- 00:12:05.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.203 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:05.203 07:18:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:05.203 00:12:05.203 --- 10.0.0.1 ping statistics --- 00:12:05.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.203 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:05.203 07:18:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.203 07:18:06 -- nvmf/common.sh@421 -- # return 0 00:12:05.203 07:18:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:05.203 07:18:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.203 07:18:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:05.203 07:18:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:05.203 07:18:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.203 07:18:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:05.203 07:18:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:05.203 07:18:06 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:05.204 07:18:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.204 07:18:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:05.204 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:12:05.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
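nvmfappstart here launches the target inside the freshly built namespace and blocks until the JSON-RPC socket answers. Stripped of the test plumbing, that is approximately the following (binary and socket paths as printed in the log; the polling loop is a simplified stand-in for the helper's actual wait, and rpc_get_methods is just used as a cheap readiness probe):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app accepts RPCs on /var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done

The -m 0xF core mask limits the app to cores 0-3, which is why four reactors report starting in the notices that follow.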
00:12:05.204 07:18:06 -- nvmf/common.sh@469 -- # nvmfpid=77641 00:12:05.204 07:18:06 -- nvmf/common.sh@470 -- # waitforlisten 77641 00:12:05.204 07:18:06 -- common/autotest_common.sh@819 -- # '[' -z 77641 ']' 00:12:05.204 07:18:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.204 07:18:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:05.204 07:18:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.204 07:18:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:05.204 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:12:05.204 07:18:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.204 [2024-11-04 07:18:06.935699] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:05.204 [2024-11-04 07:18:06.935795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.464 [2024-11-04 07:18:07.073259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.464 [2024-11-04 07:18:07.140990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.464 [2024-11-04 07:18:07.141274] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.464 [2024-11-04 07:18:07.141321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.464 [2024-11-04 07:18:07.141346] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
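The multitarget checks that follow drive the target-management RPCs (nvmf_get_targets, nvmf_create_target, nvmf_delete_target) through multitarget_rpc.py. In outline, the sequence exercised just below is (script path as in the log; the jq/wc assertion plumbing around each step is omitted):

  RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length             # expect 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32   # add two named targets
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length             # expect 3
  $RPC nvmf_delete_target -n nvmf_tgt_1         # remove them again
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length             # expect 1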
00:12:05.464 [2024-11-04 07:18:07.141580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.464 [2024-11-04 07:18:07.141908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.464 [2024-11-04 07:18:07.142693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.464 [2024-11-04 07:18:07.142738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.400 07:18:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.400 07:18:07 -- common/autotest_common.sh@852 -- # return 0 00:12:06.400 07:18:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.400 07:18:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:06.400 07:18:07 -- common/autotest_common.sh@10 -- # set +x 00:12:06.400 07:18:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.400 07:18:07 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.400 07:18:07 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.400 07:18:07 -- target/multitarget.sh@21 -- # jq length 00:12:06.400 07:18:08 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:06.400 07:18:08 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:06.400 "nvmf_tgt_1" 00:12:06.400 07:18:08 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:06.659 "nvmf_tgt_2" 00:12:06.659 07:18:08 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.659 07:18:08 -- target/multitarget.sh@28 -- # jq length 00:12:06.917 07:18:08 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:06.917 07:18:08 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:06.917 true 00:12:06.917 07:18:08 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:07.175 true 00:12:07.175 07:18:08 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.175 07:18:08 -- target/multitarget.sh@35 -- # jq length 00:12:07.175 07:18:08 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:07.175 07:18:08 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:07.175 07:18:08 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:07.175 07:18:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:07.175 07:18:08 -- nvmf/common.sh@116 -- # sync 00:12:07.175 07:18:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:07.175 07:18:08 -- nvmf/common.sh@119 -- # set +e 00:12:07.175 07:18:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:07.175 07:18:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:07.175 rmmod nvme_tcp 00:12:07.175 rmmod nvme_fabrics 00:12:07.433 rmmod nvme_keyring 00:12:07.433 07:18:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:07.433 07:18:09 -- nvmf/common.sh@123 -- # set -e 00:12:07.433 07:18:09 -- nvmf/common.sh@124 -- # return 0 00:12:07.433 07:18:09 -- nvmf/common.sh@477 -- # '[' -n 77641 ']' 00:12:07.433 07:18:09 -- nvmf/common.sh@478 -- # killprocess 77641 00:12:07.433 07:18:09 
-- common/autotest_common.sh@926 -- # '[' -z 77641 ']' 00:12:07.433 07:18:09 -- common/autotest_common.sh@930 -- # kill -0 77641 00:12:07.433 07:18:09 -- common/autotest_common.sh@931 -- # uname 00:12:07.433 07:18:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.433 07:18:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77641 00:12:07.433 killing process with pid 77641 00:12:07.433 07:18:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.433 07:18:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.433 07:18:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77641' 00:12:07.433 07:18:09 -- common/autotest_common.sh@945 -- # kill 77641 00:12:07.433 07:18:09 -- common/autotest_common.sh@950 -- # wait 77641 00:12:07.691 07:18:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:07.691 07:18:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:07.691 07:18:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:07.691 07:18:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.691 07:18:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:07.691 07:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.691 07:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.691 07:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.691 07:18:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:07.691 ************************************ 00:12:07.691 END TEST nvmf_multitarget 00:12:07.691 ************************************ 00:12:07.691 00:12:07.691 real 0m2.997s 00:12:07.691 user 0m9.979s 00:12:07.691 sys 0m0.714s 00:12:07.691 07:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.691 07:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:07.691 07:18:09 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:07.691 07:18:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:07.691 07:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.691 07:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:07.691 ************************************ 00:12:07.691 START TEST nvmf_rpc 00:12:07.691 ************************************ 00:12:07.691 07:18:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:07.949 * Looking for test storage... 
00:12:07.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.949 07:18:09 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.949 07:18:09 -- nvmf/common.sh@7 -- # uname -s 00:12:07.949 07:18:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.949 07:18:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.949 07:18:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.949 07:18:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.949 07:18:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.949 07:18:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.949 07:18:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.949 07:18:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.949 07:18:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.949 07:18:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.949 07:18:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:07.949 07:18:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:07.949 07:18:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.949 07:18:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.949 07:18:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.949 07:18:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.949 07:18:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.949 07:18:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.949 07:18:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.949 07:18:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.949 07:18:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.950 07:18:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.950 07:18:09 -- paths/export.sh@5 
-- # export PATH 00:12:07.950 07:18:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.950 07:18:09 -- nvmf/common.sh@46 -- # : 0 00:12:07.950 07:18:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:07.950 07:18:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:07.950 07:18:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:07.950 07:18:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.950 07:18:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.950 07:18:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:07.950 07:18:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:07.950 07:18:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:07.950 07:18:09 -- target/rpc.sh@11 -- # loops=5 00:12:07.950 07:18:09 -- target/rpc.sh@23 -- # nvmftestinit 00:12:07.950 07:18:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:07.950 07:18:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.950 07:18:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:07.950 07:18:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:07.950 07:18:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:07.950 07:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.950 07:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.950 07:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.950 07:18:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:07.950 07:18:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:07.950 07:18:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:07.950 07:18:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:07.950 07:18:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:07.950 07:18:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:07.950 07:18:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.950 07:18:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.950 07:18:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:07.950 07:18:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:07.950 07:18:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.950 07:18:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.950 07:18:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.950 07:18:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.950 07:18:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.950 07:18:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.950 07:18:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.950 07:18:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.950 07:18:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:07.950 07:18:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:07.950 Cannot find device 
"nvmf_tgt_br" 00:12:07.950 07:18:09 -- nvmf/common.sh@154 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.950 Cannot find device "nvmf_tgt_br2" 00:12:07.950 07:18:09 -- nvmf/common.sh@155 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:07.950 07:18:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:07.950 Cannot find device "nvmf_tgt_br" 00:12:07.950 07:18:09 -- nvmf/common.sh@157 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:07.950 Cannot find device "nvmf_tgt_br2" 00:12:07.950 07:18:09 -- nvmf/common.sh@158 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:07.950 07:18:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:07.950 07:18:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.950 07:18:09 -- nvmf/common.sh@161 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.950 07:18:09 -- nvmf/common.sh@162 -- # true 00:12:07.950 07:18:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.950 07:18:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.950 07:18:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.950 07:18:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.950 07:18:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.208 07:18:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.208 07:18:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.208 07:18:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.208 07:18:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.208 07:18:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:08.208 07:18:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:08.208 07:18:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:08.208 07:18:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:08.208 07:18:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.208 07:18:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.208 07:18:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.208 07:18:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:08.208 07:18:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:08.208 07:18:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.208 07:18:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.208 07:18:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.208 07:18:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.208 07:18:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.208 07:18:09 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:08.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:08.208 00:12:08.208 --- 10.0.0.2 ping statistics --- 00:12:08.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.208 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:08.208 07:18:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:08.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:12:08.208 00:12:08.208 --- 10.0.0.3 ping statistics --- 00:12:08.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.208 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:08.208 07:18:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:08.208 00:12:08.208 --- 10.0.0.1 ping statistics --- 00:12:08.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.208 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:08.208 07:18:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.208 07:18:09 -- nvmf/common.sh@421 -- # return 0 00:12:08.208 07:18:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:08.208 07:18:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.208 07:18:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:08.208 07:18:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:08.208 07:18:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.208 07:18:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:08.208 07:18:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:08.208 07:18:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:08.208 07:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:08.208 07:18:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:08.208 07:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.208 07:18:09 -- nvmf/common.sh@469 -- # nvmfpid=77873 00:12:08.208 07:18:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.208 07:18:09 -- nvmf/common.sh@470 -- # waitforlisten 77873 00:12:08.208 07:18:09 -- common/autotest_common.sh@819 -- # '[' -z 77873 ']' 00:12:08.208 07:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.208 07:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:08.208 07:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.208 07:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:08.208 07:18:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.208 [2024-11-04 07:18:10.022664] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
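From here on, rpc.sh configures the target through rpc_cmd, the test helper that sends each call to the target's JSON-RPC socket (/var/tmp/spdk.sock in this run). Outside the harness the same calls can be issued directly; a minimal sketch, assuming the repo layout seen elsewhere in the log:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_get_stats                               # per-poll-group statistics, queried below
  $RPC nvmf_create_transport -t tcp -o -u 8192      # create the TCP transport (flags as used by the test)
  $RPC bdev_malloc_create 64 512 -b Malloc1         # 64 MB malloc bdev with 512-byte blocks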
00:12:08.208 [2024-11-04 07:18:10.022743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.466 [2024-11-04 07:18:10.161259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.466 [2024-11-04 07:18:10.236198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.466 [2024-11-04 07:18:10.236931] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.466 [2024-11-04 07:18:10.237185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.466 [2024-11-04 07:18:10.237420] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.466 [2024-11-04 07:18:10.237811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.466 [2024-11-04 07:18:10.237952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.466 [2024-11-04 07:18:10.238023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.466 [2024-11-04 07:18:10.238025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.399 07:18:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:09.399 07:18:11 -- common/autotest_common.sh@852 -- # return 0 00:12:09.399 07:18:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:09.399 07:18:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:09.399 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.399 07:18:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.399 07:18:11 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:09.399 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.399 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.399 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.399 07:18:11 -- target/rpc.sh@26 -- # stats='{ 00:12:09.399 "poll_groups": [ 00:12:09.399 { 00:12:09.399 "admin_qpairs": 0, 00:12:09.399 "completed_nvme_io": 0, 00:12:09.399 "current_admin_qpairs": 0, 00:12:09.399 "current_io_qpairs": 0, 00:12:09.399 "io_qpairs": 0, 00:12:09.399 "name": "nvmf_tgt_poll_group_0", 00:12:09.399 "pending_bdev_io": 0, 00:12:09.399 "transports": [] 00:12:09.399 }, 00:12:09.399 { 00:12:09.399 "admin_qpairs": 0, 00:12:09.399 "completed_nvme_io": 0, 00:12:09.399 "current_admin_qpairs": 0, 00:12:09.399 "current_io_qpairs": 0, 00:12:09.399 "io_qpairs": 0, 00:12:09.399 "name": "nvmf_tgt_poll_group_1", 00:12:09.399 "pending_bdev_io": 0, 00:12:09.399 "transports": [] 00:12:09.399 }, 00:12:09.399 { 00:12:09.399 "admin_qpairs": 0, 00:12:09.399 "completed_nvme_io": 0, 00:12:09.399 "current_admin_qpairs": 0, 00:12:09.399 "current_io_qpairs": 0, 00:12:09.399 "io_qpairs": 0, 00:12:09.399 "name": "nvmf_tgt_poll_group_2", 00:12:09.399 "pending_bdev_io": 0, 00:12:09.399 "transports": [] 00:12:09.399 }, 00:12:09.399 { 00:12:09.399 "admin_qpairs": 0, 00:12:09.399 "completed_nvme_io": 0, 00:12:09.399 "current_admin_qpairs": 0, 00:12:09.399 "current_io_qpairs": 0, 00:12:09.399 "io_qpairs": 0, 00:12:09.399 "name": "nvmf_tgt_poll_group_3", 00:12:09.399 "pending_bdev_io": 0, 00:12:09.399 "transports": [] 00:12:09.399 } 00:12:09.399 ], 00:12:09.399 "tick_rate": 2200000000 00:12:09.399 }' 00:12:09.399 
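The stats snapshot above is taken before any transport exists: with -m 0xF there is one poll group per core (four in total), each with an empty "transports" array and zero qpair counters. The jcount/jsum helpers used next are just jq pipelines over this JSON; as shown by the xtrace output that follows, they amount to:

  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l                                  # jcount: expect 4
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # jsum:   expect 0

After nvmf_create_transport -t tcp below, the same query shows a "TCP" entry in every poll group's transports list while the qpair counters remain zero until an initiator connects.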
07:18:11 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:09.399 07:18:11 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:09.399 07:18:11 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:09.399 07:18:11 -- target/rpc.sh@15 -- # wc -l 00:12:09.399 07:18:11 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:09.399 07:18:11 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:09.399 07:18:11 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:09.399 07:18:11 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.399 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.399 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.399 [2024-11-04 07:18:11.211156] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.399 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.399 07:18:11 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:09.399 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.399 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.657 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.657 07:18:11 -- target/rpc.sh@33 -- # stats='{ 00:12:09.657 "poll_groups": [ 00:12:09.657 { 00:12:09.657 "admin_qpairs": 0, 00:12:09.657 "completed_nvme_io": 0, 00:12:09.657 "current_admin_qpairs": 0, 00:12:09.657 "current_io_qpairs": 0, 00:12:09.657 "io_qpairs": 0, 00:12:09.657 "name": "nvmf_tgt_poll_group_0", 00:12:09.657 "pending_bdev_io": 0, 00:12:09.657 "transports": [ 00:12:09.657 { 00:12:09.657 "trtype": "TCP" 00:12:09.657 } 00:12:09.657 ] 00:12:09.657 }, 00:12:09.657 { 00:12:09.657 "admin_qpairs": 0, 00:12:09.657 "completed_nvme_io": 0, 00:12:09.657 "current_admin_qpairs": 0, 00:12:09.657 "current_io_qpairs": 0, 00:12:09.657 "io_qpairs": 0, 00:12:09.657 "name": "nvmf_tgt_poll_group_1", 00:12:09.657 "pending_bdev_io": 0, 00:12:09.657 "transports": [ 00:12:09.657 { 00:12:09.657 "trtype": "TCP" 00:12:09.657 } 00:12:09.657 ] 00:12:09.657 }, 00:12:09.657 { 00:12:09.657 "admin_qpairs": 0, 00:12:09.657 "completed_nvme_io": 0, 00:12:09.657 "current_admin_qpairs": 0, 00:12:09.657 "current_io_qpairs": 0, 00:12:09.657 "io_qpairs": 0, 00:12:09.657 "name": "nvmf_tgt_poll_group_2", 00:12:09.657 "pending_bdev_io": 0, 00:12:09.657 "transports": [ 00:12:09.657 { 00:12:09.657 "trtype": "TCP" 00:12:09.657 } 00:12:09.657 ] 00:12:09.657 }, 00:12:09.657 { 00:12:09.657 "admin_qpairs": 0, 00:12:09.657 "completed_nvme_io": 0, 00:12:09.657 "current_admin_qpairs": 0, 00:12:09.657 "current_io_qpairs": 0, 00:12:09.657 "io_qpairs": 0, 00:12:09.657 "name": "nvmf_tgt_poll_group_3", 00:12:09.658 "pending_bdev_io": 0, 00:12:09.658 "transports": [ 00:12:09.658 { 00:12:09.658 "trtype": "TCP" 00:12:09.658 } 00:12:09.658 ] 00:12:09.658 } 00:12:09.658 ], 00:12:09.658 "tick_rate": 2200000000 00:12:09.658 }' 00:12:09.658 07:18:11 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:09.658 07:18:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:09.658 07:18:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:09.658 07:18:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:09.658 07:18:11 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:09.658 07:18:11 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:09.658 07:18:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:09.658 07:18:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:09.658 07:18:11 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:09.658 07:18:11 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:09.658 07:18:11 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:09.658 07:18:11 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:09.658 07:18:11 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:09.658 07:18:11 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 Malloc1 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 [2024-11-04 07:18:11.423401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -a 10.0.0.2 -s 4420 00:12:09.658 07:18:11 -- common/autotest_common.sh@640 -- # local es=0 00:12:09.658 07:18:11 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -a 10.0.0.2 -s 4420 00:12:09.658 07:18:11 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:09.658 07:18:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:09.658 07:18:11 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:09.658 07:18:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:09.658 07:18:11 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:09.658 07:18:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:09.658 07:18:11 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:09.658 07:18:11 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:09.658 07:18:11 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -a 10.0.0.2 -s 4420 00:12:09.658 [2024-11-04 07:18:11.451693] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a' 00:12:09.658 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:09.658 could not add new controller: failed to write to nvme-fabrics device 00:12:09.658 07:18:11 -- common/autotest_common.sh@643 -- # es=1 00:12:09.658 07:18:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:09.658 07:18:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:09.658 07:18:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:09.658 07:18:11 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:09.658 07:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.658 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 07:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.658 07:18:11 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.916 07:18:11 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.916 07:18:11 -- common/autotest_common.sh@1177 -- # local i=0 00:12:09.916 07:18:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.916 07:18:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:09.916 07:18:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:11.815 07:18:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:11.815 07:18:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:11.815 07:18:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.073 07:18:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:12.073 07:18:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.073 07:18:13 -- common/autotest_common.sh@1187 -- # return 0 00:12:12.073 07:18:13 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.073 07:18:13 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.073 07:18:13 -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.073 07:18:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:12.073 07:18:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.073 07:18:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:12.073 07:18:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.073 07:18:13 -- common/autotest_common.sh@1210 -- # return 0 00:12:12.073 07:18:13 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:12.073 07:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:12.073 07:18:13 -- common/autotest_common.sh@10 
-- # set +x 00:12:12.073 07:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.073 07:18:13 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.073 07:18:13 -- common/autotest_common.sh@640 -- # local es=0 00:12:12.073 07:18:13 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.073 07:18:13 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:12.073 07:18:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.073 07:18:13 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:12.073 07:18:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.073 07:18:13 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:12.073 07:18:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.073 07:18:13 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:12.073 07:18:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:12.073 07:18:13 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.073 [2024-11-04 07:18:13.763191] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a' 00:12:12.073 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:12.073 could not add new controller: failed to write to nvme-fabrics device 00:12:12.073 07:18:13 -- common/autotest_common.sh@643 -- # es=1 00:12:12.073 07:18:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:12.073 07:18:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:12.073 07:18:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:12.073 07:18:13 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:12.073 07:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:12.073 07:18:13 -- common/autotest_common.sh@10 -- # set +x 00:12:12.073 07:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.073 07:18:13 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.332 07:18:13 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.332 07:18:13 -- common/autotest_common.sh@1177 -- # local i=0 00:12:12.332 07:18:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.332 07:18:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:12.332 07:18:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:14.263 07:18:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:14.263 07:18:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:14.263 07:18:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.263 07:18:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:14.263 07:18:15 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.263 07:18:15 -- common/autotest_common.sh@1187 -- # return 0 00:12:14.263 07:18:15 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.263 07:18:16 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.263 07:18:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:14.263 07:18:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:14.263 07:18:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.263 07:18:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:14.263 07:18:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.263 07:18:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:14.263 07:18:16 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.263 07:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.263 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 07:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.263 07:18:16 -- target/rpc.sh@81 -- # seq 1 5 00:12:14.263 07:18:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.263 07:18:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.263 07:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.263 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 07:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.263 07:18:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.263 07:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.263 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 [2024-11-04 07:18:16.068225] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.263 07:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.263 07:18:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.263 07:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.263 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 07:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.263 07:18:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.263 07:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.263 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.263 07:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.263 07:18:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.521 07:18:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.521 07:18:16 -- common/autotest_common.sh@1177 -- # local i=0 00:12:14.521 07:18:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.521 07:18:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:14.521 07:18:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:16.426 07:18:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
00:12:16.426 07:18:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:16.426 07:18:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.684 07:18:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:16.684 07:18:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.684 07:18:18 -- common/autotest_common.sh@1187 -- # return 0 00:12:16.684 07:18:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.684 07:18:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.684 07:18:18 -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.684 07:18:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:16.684 07:18:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.684 07:18:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:16.684 07:18:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.684 07:18:18 -- common/autotest_common.sh@1210 -- # return 0 00:12:16.684 07:18:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:16.684 07:18:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 [2024-11-04 07:18:18.472594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.684 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.684 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.684 07:18:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 
--hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.942 07:18:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.942 07:18:18 -- common/autotest_common.sh@1177 -- # local i=0 00:12:16.942 07:18:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.942 07:18:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:16.942 07:18:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:18.841 07:18:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:18.841 07:18:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:18.841 07:18:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.100 07:18:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:19.100 07:18:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.100 07:18:20 -- common/autotest_common.sh@1187 -- # return 0 00:12:19.100 07:18:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.100 07:18:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.100 07:18:20 -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.100 07:18:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:19.100 07:18:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.100 07:18:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.100 07:18:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:19.100 07:18:20 -- common/autotest_common.sh@1210 -- # return 0 00:12:19.100 07:18:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.100 07:18:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.100 [2024-11-04 07:18:20.876990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set 
+x 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.100 07:18:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.100 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.100 07:18:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.100 07:18:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.358 07:18:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.358 07:18:21 -- common/autotest_common.sh@1177 -- # local i=0 00:12:19.358 07:18:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.358 07:18:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:19.358 07:18:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:21.258 07:18:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:21.258 07:18:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:21.258 07:18:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.258 07:18:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:21.258 07:18:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.258 07:18:23 -- common/autotest_common.sh@1187 -- # return 0 00:12:21.258 07:18:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.516 07:18:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.516 07:18:23 -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.516 07:18:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:21.516 07:18:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.516 07:18:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:21.516 07:18:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.516 07:18:23 -- common/autotest_common.sh@1210 -- # return 0 00:12:21.516 07:18:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.516 07:18:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 [2024-11-04 07:18:23.177435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.516 07:18:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.516 07:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 07:18:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.516 07:18:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.774 07:18:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.774 07:18:23 -- common/autotest_common.sh@1177 -- # local i=0 00:12:21.774 07:18:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.774 07:18:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:21.774 07:18:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:23.674 07:18:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:23.674 07:18:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:23.674 07:18:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.674 07:18:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:23.674 07:18:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.674 07:18:25 -- common/autotest_common.sh@1187 -- # return 0 00:12:23.674 07:18:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.932 07:18:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.932 07:18:25 -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.932 07:18:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:23.932 07:18:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.932 07:18:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.932 07:18:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:23.932 07:18:25 -- common/autotest_common.sh@1210 -- # return 0 00:12:23.932 07:18:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
00:12:23.932 07:18:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 [2024-11-04 07:18:25.577781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.932 07:18:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.932 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.932 07:18:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.932 07:18:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.932 07:18:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.932 07:18:25 -- common/autotest_common.sh@1177 -- # local i=0 00:12:23.932 07:18:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.932 07:18:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:23.932 07:18:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:26.462 07:18:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:26.462 07:18:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:26.462 07:18:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.462 07:18:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:26.462 07:18:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.462 07:18:27 -- common/autotest_common.sh@1187 -- # return 0 00:12:26.462 07:18:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.462 07:18:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.462 07:18:27 -- common/autotest_common.sh@1198 -- # local i=0 00:12:26.462 07:18:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:26.462 07:18:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.462 07:18:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:26.463 07:18:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.463 07:18:27 -- common/autotest_common.sh@1210 -- # return 0 00:12:26.463 07:18:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@99 -- # seq 1 5 00:12:26.463 07:18:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.463 07:18:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 [2024-11-04 07:18:27.898041] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.463 07:18:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 [2024-11-04 07:18:27.946066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.463 07:18:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.463 07:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 [2024-11-04 07:18:27.998173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.463 07:18:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 [2024-11-04 07:18:28.046273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.463 07:18:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 [2024-11-04 07:18:28.094342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.463 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.463 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.463 07:18:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.464 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.464 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.464 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.464 07:18:28 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:26.464 07:18:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.464 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.464 07:18:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.464 07:18:28 -- target/rpc.sh@110 -- # stats='{ 00:12:26.464 "poll_groups": [ 00:12:26.464 { 00:12:26.464 "admin_qpairs": 2, 00:12:26.464 "completed_nvme_io": 66, 00:12:26.464 "current_admin_qpairs": 0, 00:12:26.464 "current_io_qpairs": 0, 00:12:26.464 "io_qpairs": 16, 00:12:26.464 "name": "nvmf_tgt_poll_group_0", 00:12:26.464 "pending_bdev_io": 0, 00:12:26.464 "transports": [ 00:12:26.464 { 00:12:26.464 "trtype": "TCP" 00:12:26.464 } 00:12:26.464 ] 00:12:26.464 }, 00:12:26.464 { 00:12:26.464 "admin_qpairs": 3, 00:12:26.464 "completed_nvme_io": 67, 00:12:26.464 "current_admin_qpairs": 0, 00:12:26.464 "current_io_qpairs": 0, 00:12:26.464 "io_qpairs": 17, 00:12:26.464 "name": "nvmf_tgt_poll_group_1", 00:12:26.464 "pending_bdev_io": 0, 00:12:26.464 "transports": [ 00:12:26.464 { 00:12:26.464 "trtype": "TCP" 00:12:26.464 } 00:12:26.464 ] 00:12:26.464 }, 00:12:26.464 { 00:12:26.464 "admin_qpairs": 1, 00:12:26.464 "completed_nvme_io": 169, 00:12:26.464 "current_admin_qpairs": 0, 00:12:26.464 "current_io_qpairs": 0, 00:12:26.464 "io_qpairs": 19, 00:12:26.464 "name": "nvmf_tgt_poll_group_2", 00:12:26.464 "pending_bdev_io": 0, 00:12:26.464 "transports": [ 00:12:26.464 { 00:12:26.464 "trtype": "TCP" 00:12:26.464 } 00:12:26.464 ] 00:12:26.464 }, 00:12:26.464 { 00:12:26.464 "admin_qpairs": 1, 00:12:26.464 "completed_nvme_io": 118, 00:12:26.464 "current_admin_qpairs": 0, 00:12:26.464 "current_io_qpairs": 0, 00:12:26.464 "io_qpairs": 18, 00:12:26.464 "name": "nvmf_tgt_poll_group_3", 00:12:26.464 "pending_bdev_io": 0, 00:12:26.464 "transports": [ 00:12:26.464 { 00:12:26.464 "trtype": "TCP" 00:12:26.464 } 00:12:26.464 ] 00:12:26.464 } 00:12:26.464 ], 00:12:26.464 "tick_rate": 2200000000 00:12:26.464 }' 00:12:26.464 07:18:28 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:26.464 07:18:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:26.464 07:18:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:26.464 07:18:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.464 07:18:28 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:26.464 07:18:28 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:26.464 07:18:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:26.464 07:18:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
00:12:26.464 07:18:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.464 07:18:28 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:26.464 07:18:28 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:26.464 07:18:28 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:26.464 07:18:28 -- target/rpc.sh@123 -- # nvmftestfini 00:12:26.464 07:18:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:26.464 07:18:28 -- nvmf/common.sh@116 -- # sync 00:12:26.723 07:18:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:26.723 07:18:28 -- nvmf/common.sh@119 -- # set +e 00:12:26.723 07:18:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:26.723 07:18:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:26.723 rmmod nvme_tcp 00:12:26.723 rmmod nvme_fabrics 00:12:26.723 rmmod nvme_keyring 00:12:26.723 07:18:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:26.723 07:18:28 -- nvmf/common.sh@123 -- # set -e 00:12:26.723 07:18:28 -- nvmf/common.sh@124 -- # return 0 00:12:26.723 07:18:28 -- nvmf/common.sh@477 -- # '[' -n 77873 ']' 00:12:26.723 07:18:28 -- nvmf/common.sh@478 -- # killprocess 77873 00:12:26.723 07:18:28 -- common/autotest_common.sh@926 -- # '[' -z 77873 ']' 00:12:26.723 07:18:28 -- common/autotest_common.sh@930 -- # kill -0 77873 00:12:26.723 07:18:28 -- common/autotest_common.sh@931 -- # uname 00:12:26.723 07:18:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.723 07:18:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77873 00:12:26.723 killing process with pid 77873 00:12:26.723 07:18:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.723 07:18:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.723 07:18:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77873' 00:12:26.723 07:18:28 -- common/autotest_common.sh@945 -- # kill 77873 00:12:26.723 07:18:28 -- common/autotest_common.sh@950 -- # wait 77873 00:12:26.981 07:18:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:26.981 07:18:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:26.981 07:18:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:26.982 07:18:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.982 07:18:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:26.982 07:18:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.982 07:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.982 07:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.982 07:18:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:26.982 00:12:26.982 real 0m19.229s 00:12:26.982 user 1m13.053s 00:12:26.982 sys 0m2.035s 00:12:26.982 07:18:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.982 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.982 ************************************ 00:12:26.982 END TEST nvmf_rpc 00:12:26.982 ************************************ 00:12:26.982 07:18:28 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:26.982 07:18:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:26.982 07:18:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.982 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.982 ************************************ 00:12:26.982 START TEST nvmf_invalid 00:12:26.982 ************************************ 00:12:26.982 
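For reference, the nvmf_rpc run that just ended repeatedly exercised the basic subsystem lifecycle over TCP: create a subsystem, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 namespace, allow any host, connect and disconnect with nvme-cli, then remove the namespace and delete the subsystem, finally checking the accumulated qpair counters with nvmf_get_stats. A minimal sketch of one iteration, assuming the test's rpc_cmd wrapper is replaced by a direct scripts/rpc.py call and reusing the host NQN from the run:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1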
07:18:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:27.240 * Looking for test storage... 00:12:27.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.240 07:18:28 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.240 07:18:28 -- nvmf/common.sh@7 -- # uname -s 00:12:27.240 07:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.240 07:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.240 07:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.240 07:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.240 07:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.240 07:18:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.240 07:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.240 07:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.240 07:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.240 07:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.240 07:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:27.240 07:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:27.240 07:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.240 07:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.240 07:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.240 07:18:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.240 07:18:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.240 07:18:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.240 07:18:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.240 07:18:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.241 07:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.241 07:18:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.241 07:18:28 -- paths/export.sh@5 -- # export PATH 00:12:27.241 07:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.241 07:18:28 -- nvmf/common.sh@46 -- # : 0 00:12:27.241 07:18:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:27.241 07:18:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:27.241 07:18:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:27.241 07:18:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.241 07:18:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.241 07:18:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:27.241 07:18:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:27.241 07:18:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:27.241 07:18:28 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:27.241 07:18:28 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.241 07:18:28 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:27.241 07:18:28 -- target/invalid.sh@14 -- # target=foobar 00:12:27.241 07:18:28 -- target/invalid.sh@16 -- # RANDOM=0 00:12:27.241 07:18:28 -- target/invalid.sh@34 -- # nvmftestinit 00:12:27.241 07:18:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:27.241 07:18:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.241 07:18:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:27.241 07:18:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:27.241 07:18:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:27.241 07:18:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.241 07:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.241 07:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.241 07:18:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:27.241 07:18:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:27.241 07:18:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:27.241 07:18:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:27.241 07:18:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:27.241 07:18:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:27.241 07:18:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.241 07:18:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.241 07:18:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:27.241 07:18:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:27.241 07:18:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.241 07:18:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.241 07:18:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.241 07:18:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.241 07:18:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.241 07:18:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.241 07:18:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.241 07:18:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.241 07:18:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:27.241 07:18:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:27.241 Cannot find device "nvmf_tgt_br" 00:12:27.241 07:18:28 -- nvmf/common.sh@154 -- # true 00:12:27.241 07:18:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.241 Cannot find device "nvmf_tgt_br2" 00:12:27.241 07:18:28 -- nvmf/common.sh@155 -- # true 00:12:27.241 07:18:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:27.241 07:18:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:27.241 Cannot find device "nvmf_tgt_br" 00:12:27.241 07:18:28 -- nvmf/common.sh@157 -- # true 00:12:27.241 07:18:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:27.241 Cannot find device "nvmf_tgt_br2" 00:12:27.241 07:18:28 -- nvmf/common.sh@158 -- # true 00:12:27.241 07:18:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:27.241 07:18:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:27.241 07:18:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.241 07:18:29 -- nvmf/common.sh@161 -- # true 00:12:27.241 07:18:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.241 07:18:29 -- nvmf/common.sh@162 -- # true 00:12:27.241 07:18:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.241 07:18:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.241 07:18:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.241 07:18:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.241 07:18:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.500 07:18:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.500 07:18:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.500 07:18:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.500 07:18:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.500 07:18:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:27.500 07:18:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:27.500 07:18:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:27.500 07:18:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:27.500 07:18:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.500 07:18:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.500 07:18:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.500 07:18:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:27.500 07:18:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:27.500 07:18:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.500 07:18:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.500 07:18:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.500 07:18:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.500 07:18:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.500 07:18:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:27.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:27.500 00:12:27.500 --- 10.0.0.2 ping statistics --- 00:12:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.500 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:27.500 07:18:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:27.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:27.500 00:12:27.500 --- 10.0.0.3 ping statistics --- 00:12:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.500 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:27.500 07:18:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:12:27.500 00:12:27.500 --- 10.0.0.1 ping statistics --- 00:12:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.500 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:12:27.500 07:18:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.500 07:18:29 -- nvmf/common.sh@421 -- # return 0 00:12:27.500 07:18:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:27.500 07:18:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.500 07:18:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:27.500 07:18:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:27.500 07:18:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.500 07:18:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:27.500 07:18:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:27.500 07:18:29 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:27.500 07:18:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:27.500 07:18:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:27.500 07:18:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.500 07:18:29 -- nvmf/common.sh@469 -- # nvmfpid=78382 00:12:27.500 07:18:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.500 07:18:29 -- nvmf/common.sh@470 -- # waitforlisten 78382 00:12:27.500 07:18:29 -- common/autotest_common.sh@819 -- # '[' -z 78382 ']' 00:12:27.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.500 07:18:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.500 07:18:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:27.500 07:18:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.500 07:18:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:27.500 07:18:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.500 [2024-11-04 07:18:29.319450] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:27.500 [2024-11-04 07:18:29.319716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.758 [2024-11-04 07:18:29.460230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.759 [2024-11-04 07:18:29.536125] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:27.759 [2024-11-04 07:18:29.536615] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.759 [2024-11-04 07:18:29.536640] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.759 [2024-11-04 07:18:29.536650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
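(Reference note: the nvmf_veth_init sequence traced above boils down to the iproute2/iptables sketch below. Interface names, addresses, and the NVMe/TCP port 4420 are taken directly from the trace; the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern and is omitted for brevity. This is a condensed illustration run as root, not the harness code itself.)

    # Initiator stays in the default namespace; the target side lives in
    # nvmf_tgt_ns_spdk and both halves meet on the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                                   # bridge the two veth halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                        # initiator -> target reachability check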
00:12:27.759 [2024-11-04 07:18:29.536781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.759 [2024-11-04 07:18:29.537468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.759 [2024-11-04 07:18:29.537633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.759 [2024-11-04 07:18:29.537646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.691 07:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.691 07:18:30 -- common/autotest_common.sh@852 -- # return 0 00:12:28.691 07:18:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.691 07:18:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:28.691 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 07:18:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.691 07:18:30 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:28.691 07:18:30 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4004 00:12:28.949 [2024-11-04 07:18:30.576416] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:28.949 07:18:30 -- target/invalid.sh@40 -- # out='2024/11/04 07:18:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4004 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:28.949 request: 00:12:28.949 { 00:12:28.949 "method": "nvmf_create_subsystem", 00:12:28.949 "params": { 00:12:28.949 "nqn": "nqn.2016-06.io.spdk:cnode4004", 00:12:28.949 "tgt_name": "foobar" 00:12:28.949 } 00:12:28.949 } 00:12:28.949 Got JSON-RPC error response 00:12:28.949 GoRPCClient: error on JSON-RPC call' 00:12:28.949 07:18:30 -- target/invalid.sh@41 -- # [[ 2024/11/04 07:18:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4004 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:28.949 request: 00:12:28.949 { 00:12:28.949 "method": "nvmf_create_subsystem", 00:12:28.949 "params": { 00:12:28.949 "nqn": "nqn.2016-06.io.spdk:cnode4004", 00:12:28.949 "tgt_name": "foobar" 00:12:28.949 } 00:12:28.949 } 00:12:28.949 Got JSON-RPC error response 00:12:28.949 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:28.949 07:18:30 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:28.949 07:18:30 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9455 00:12:29.207 [2024-11-04 07:18:30.856834] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9455: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:29.207 07:18:30 -- target/invalid.sh@45 -- # out='2024/11/04 07:18:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9455 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:29.207 request: 00:12:29.207 { 00:12:29.207 "method": "nvmf_create_subsystem", 00:12:29.207 "params": { 00:12:29.207 "nqn": "nqn.2016-06.io.spdk:cnode9455", 00:12:29.207 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:29.207 } 00:12:29.207 } 00:12:29.207 Got JSON-RPC error response 00:12:29.207 GoRPCClient: error on JSON-RPC call' 00:12:29.207 07:18:30 -- target/invalid.sh@46 -- # [[ 2024/11/04 07:18:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9455 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:29.207 request: 00:12:29.207 { 00:12:29.207 "method": "nvmf_create_subsystem", 00:12:29.207 "params": { 00:12:29.207 "nqn": "nqn.2016-06.io.spdk:cnode9455", 00:12:29.207 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:29.207 } 00:12:29.207 } 00:12:29.207 Got JSON-RPC error response 00:12:29.207 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:29.207 07:18:30 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:29.207 07:18:30 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23107 00:12:29.466 [2024-11-04 07:18:31.117102] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23107: invalid model number 'SPDK_Controller' 00:12:29.466 07:18:31 -- target/invalid.sh@50 -- # out='2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23107], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:29.466 request: 00:12:29.466 { 00:12:29.466 "method": "nvmf_create_subsystem", 00:12:29.466 "params": { 00:12:29.466 "nqn": "nqn.2016-06.io.spdk:cnode23107", 00:12:29.466 "model_number": "SPDK_Controller\u001f" 00:12:29.466 } 00:12:29.466 } 00:12:29.466 Got JSON-RPC error response 00:12:29.466 GoRPCClient: error on JSON-RPC call' 00:12:29.466 07:18:31 -- target/invalid.sh@51 -- # [[ 2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23107], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:29.466 request: 00:12:29.466 { 00:12:29.466 "method": "nvmf_create_subsystem", 00:12:29.466 "params": { 00:12:29.466 "nqn": "nqn.2016-06.io.spdk:cnode23107", 00:12:29.466 "model_number": "SPDK_Controller\u001f" 00:12:29.466 } 00:12:29.466 } 00:12:29.466 Got JSON-RPC error response 00:12:29.466 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:29.466 07:18:31 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:29.466 07:18:31 -- target/invalid.sh@19 -- # local length=21 ll 00:12:29.466 07:18:31 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:29.466 07:18:31 -- target/invalid.sh@21 -- # local chars 00:12:29.466 07:18:31 -- target/invalid.sh@22 -- # local string 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 66 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=B 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 110 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=n 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 97 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=a 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 123 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+='{' 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 35 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+='#' 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 58 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=: 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 81 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=Q 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 85 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=U 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 113 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=q 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 97 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=a 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 68 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=D 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 84 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=T 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 97 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=a 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # printf %x 82 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:29.466 07:18:31 -- target/invalid.sh@25 -- # string+=R 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.466 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 101 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=e 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 83 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=S 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 109 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=m 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 70 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=F 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 126 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+='~' 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 48 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=0 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # printf %x 52 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:29.467 07:18:31 -- target/invalid.sh@25 -- # string+=4 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.467 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.467 07:18:31 -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:29.467 07:18:31 -- target/invalid.sh@31 -- # echo 'Bna{#:QUqaDTaReSmF~04' 00:12:29.467 07:18:31 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Bna{#:QUqaDTaReSmF~04' nqn.2016-06.io.spdk:cnode26320 
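(Reference note: the long printf/echo run above is the harness's gen_random_s helper assembling the 21-character serial number 'Bna{#:QUqaDTaReSmF~04' from its printable-ASCII code table. A compact sketch of that loop follows; the index-selection expression is not visible in the trace, so the modulo pick from $RANDOM below is an assumption, as is re-seeding with RANDOM=0 for determinism.)

    gen_random_s() {                          # sketch only; mirrors the traced loop
        local length=$1 ll string= code ch
        local chars=($(seq 32 127))           # same printable-ASCII code table as in the trace
        for (( ll = 0; ll < length; ll++ )); do
            code=${chars[RANDOM % ${#chars[@]}]}    # assumed selection expression
            printf -v ch "\\x$(printf %x "$code")"  # hex code -> character (printf %x / echo -e in the trace)
            string+=$ch
        done
        echo "$string"
    }
    RANDOM=0            # the test seeds RANDOM=0 earlier, so the "random" string is reproducible
    gen_random_s 21     # 21-character serial number, as passed to rpc.py above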
00:12:29.725 [2024-11-04 07:18:31.461584] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26320: invalid serial number 'Bna{#:QUqaDTaReSmF~04' 00:12:29.725 07:18:31 -- target/invalid.sh@54 -- # out='2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26320 serial_number:Bna{#:QUqaDTaReSmF~04], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Bna{#:QUqaDTaReSmF~04 00:12:29.725 request: 00:12:29.725 { 00:12:29.725 "method": "nvmf_create_subsystem", 00:12:29.725 "params": { 00:12:29.725 "nqn": "nqn.2016-06.io.spdk:cnode26320", 00:12:29.725 "serial_number": "Bna{#:QUqaDTaReSmF~04" 00:12:29.725 } 00:12:29.725 } 00:12:29.725 Got JSON-RPC error response 00:12:29.725 GoRPCClient: error on JSON-RPC call' 00:12:29.725 07:18:31 -- target/invalid.sh@55 -- # [[ 2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26320 serial_number:Bna{#:QUqaDTaReSmF~04], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Bna{#:QUqaDTaReSmF~04 00:12:29.725 request: 00:12:29.725 { 00:12:29.725 "method": "nvmf_create_subsystem", 00:12:29.725 "params": { 00:12:29.725 "nqn": "nqn.2016-06.io.spdk:cnode26320", 00:12:29.725 "serial_number": "Bna{#:QUqaDTaReSmF~04" 00:12:29.725 } 00:12:29.725 } 00:12:29.725 Got JSON-RPC error response 00:12:29.725 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:29.725 07:18:31 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:29.725 07:18:31 -- target/invalid.sh@19 -- # local length=41 ll 00:12:29.725 07:18:31 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:29.725 07:18:31 -- target/invalid.sh@21 -- # local chars 00:12:29.725 07:18:31 -- target/invalid.sh@22 -- # local string 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 106 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=j 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 82 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=R 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 51 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=3 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 37 00:12:29.725 07:18:31 -- 
target/invalid.sh@25 -- # echo -e '\x25' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=% 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 127 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # printf %x 82 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:29.725 07:18:31 -- target/invalid.sh@25 -- # string+=R 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.725 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 107 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=k 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 92 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+='\' 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 69 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=E 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 114 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=r 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 103 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=g 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 115 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=s 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 101 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=e 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 36 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+='$' 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 99 00:12:29.726 07:18:31 
-- target/invalid.sh@25 -- # echo -e '\x63' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+=c 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 126 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # string+='~' 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.726 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.726 07:18:31 -- target/invalid.sh@25 -- # printf %x 99 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=c 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 119 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=w 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 40 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+='(' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 121 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=y 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 69 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=E 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 54 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=6 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 123 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+='{' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 59 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=';' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 118 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=v 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 87 00:12:29.984 07:18:31 
-- target/invalid.sh@25 -- # echo -e '\x57' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=W 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 80 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=P 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 88 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=X 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 35 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+='#' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 34 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+='"' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 78 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+=N 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 92 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # string+='\' 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.984 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.984 07:18:31 -- target/invalid.sh@25 -- # printf %x 66 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=B 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 97 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=a 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 53 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=5 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 71 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=G 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 72 00:12:29.985 07:18:31 -- 
target/invalid.sh@25 -- # echo -e '\x48' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=H 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 119 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=w 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 64 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=@ 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 39 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=\' 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # printf %x 99 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:29.985 07:18:31 -- target/invalid.sh@25 -- # string+=c 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:29.985 07:18:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:29.985 07:18:31 -- target/invalid.sh@28 -- # [[ j == \- ]] 00:12:29.985 07:18:31 -- target/invalid.sh@31 -- # echo 'jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'\''c' 00:12:29.985 07:18:31 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'\''c' nqn.2016-06.io.spdk:cnode7499 00:12:30.243 [2024-11-04 07:18:31.962313] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7499: invalid model number 'jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'c' 00:12:30.243 07:18:31 -- target/invalid.sh@58 -- # out='2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'\''c nqn:nqn.2016-06.io.spdk:cnode7499], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'\''c 00:12:30.243 request: 00:12:30.243 { 00:12:30.243 "method": "nvmf_create_subsystem", 00:12:30.243 "params": { 00:12:30.243 "nqn": "nqn.2016-06.io.spdk:cnode7499", 00:12:30.243 "model_number": "jR3%\u007fRk\\Ergse$c~cw(yE6{;vWPX#\"N\\Ba5GHw@'\''c" 00:12:30.243 } 00:12:30.243 } 00:12:30.243 Got JSON-RPC error response 00:12:30.243 GoRPCClient: error on JSON-RPC call' 00:12:30.243 07:18:31 -- target/invalid.sh@59 -- # [[ 2024/11/04 07:18:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'c nqn:nqn.2016-06.io.spdk:cnode7499], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN jR3%Rk\Ergse$c~cw(yE6{;vWPX#"N\Ba5GHw@'c 00:12:30.243 request: 00:12:30.243 { 00:12:30.243 "method": "nvmf_create_subsystem", 00:12:30.243 "params": { 00:12:30.243 "nqn": "nqn.2016-06.io.spdk:cnode7499", 00:12:30.243 "model_number": "jR3%\u007fRk\\Ergse$c~cw(yE6{;vWPX#\"N\\Ba5GHw@'c" 00:12:30.243 } 00:12:30.243 } 00:12:30.243 Got JSON-RPC error response 00:12:30.243 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \M\N* ]] 00:12:30.243 07:18:31 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:30.501 [2024-11-04 07:18:32.234771] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.501 07:18:32 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:30.759 07:18:32 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:30.759 07:18:32 -- target/invalid.sh@67 -- # echo '' 00:12:30.759 07:18:32 -- target/invalid.sh@67 -- # head -n 1 00:12:30.759 07:18:32 -- target/invalid.sh@67 -- # IP= 00:12:30.759 07:18:32 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:31.344 [2024-11-04 07:18:32.859496] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:31.344 07:18:32 -- target/invalid.sh@69 -- # out='2024/11/04 07:18:32 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:31.344 request: 00:12:31.344 { 00:12:31.344 "method": "nvmf_subsystem_remove_listener", 00:12:31.344 "params": { 00:12:31.344 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:31.344 "listen_address": { 00:12:31.344 "trtype": "tcp", 00:12:31.344 "traddr": "", 00:12:31.344 "trsvcid": "4421" 00:12:31.344 } 00:12:31.344 } 00:12:31.344 } 00:12:31.344 Got JSON-RPC error response 00:12:31.344 GoRPCClient: error on JSON-RPC call' 00:12:31.344 07:18:32 -- target/invalid.sh@70 -- # [[ 2024/11/04 07:18:32 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:31.344 request: 00:12:31.344 { 00:12:31.344 "method": "nvmf_subsystem_remove_listener", 00:12:31.344 "params": { 00:12:31.344 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:31.344 "listen_address": { 00:12:31.344 "trtype": "tcp", 00:12:31.344 "traddr": "", 00:12:31.344 "trsvcid": "4421" 00:12:31.344 } 00:12:31.344 } 00:12:31.344 } 00:12:31.344 Got JSON-RPC error response 00:12:31.344 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:31.344 07:18:32 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25086 -i 0 00:12:31.344 [2024-11-04 07:18:33.135844] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25086: invalid cntlid range [0-65519] 00:12:31.344 07:18:33 -- target/invalid.sh@73 -- # out='2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25086], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:31.344 request: 00:12:31.344 { 00:12:31.344 "method": "nvmf_create_subsystem", 00:12:31.344 "params": { 00:12:31.344 "nqn": "nqn.2016-06.io.spdk:cnode25086", 00:12:31.344 "min_cntlid": 0 00:12:31.344 } 00:12:31.344 } 00:12:31.344 Got JSON-RPC error response 00:12:31.344 GoRPCClient: error on JSON-RPC call' 00:12:31.344 07:18:33 -- target/invalid.sh@74 -- # [[ 2024/11/04 07:18:33 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25086], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:31.344 request: 00:12:31.344 { 00:12:31.344 "method": "nvmf_create_subsystem", 00:12:31.344 "params": { 00:12:31.344 "nqn": "nqn.2016-06.io.spdk:cnode25086", 00:12:31.344 "min_cntlid": 0 00:12:31.344 } 00:12:31.344 } 00:12:31.344 Got JSON-RPC error response 00:12:31.344 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.344 07:18:33 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9189 -i 65520 00:12:31.609 [2024-11-04 07:18:33.352140] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9189: invalid cntlid range [65520-65519] 00:12:31.609 07:18:33 -- target/invalid.sh@75 -- # out='2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9189], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:31.609 request: 00:12:31.609 { 00:12:31.609 "method": "nvmf_create_subsystem", 00:12:31.609 "params": { 00:12:31.609 "nqn": "nqn.2016-06.io.spdk:cnode9189", 00:12:31.609 "min_cntlid": 65520 00:12:31.609 } 00:12:31.609 } 00:12:31.609 Got JSON-RPC error response 00:12:31.609 GoRPCClient: error on JSON-RPC call' 00:12:31.609 07:18:33 -- target/invalid.sh@76 -- # [[ 2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9189], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:31.609 request: 00:12:31.609 { 00:12:31.609 "method": "nvmf_create_subsystem", 00:12:31.609 "params": { 00:12:31.609 "nqn": "nqn.2016-06.io.spdk:cnode9189", 00:12:31.609 "min_cntlid": 65520 00:12:31.609 } 00:12:31.609 } 00:12:31.609 Got JSON-RPC error response 00:12:31.609 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.609 07:18:33 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30359 -I 0 00:12:31.867 [2024-11-04 07:18:33.652518] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30359: invalid cntlid range [1-0] 00:12:31.867 07:18:33 -- target/invalid.sh@77 -- # out='2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:31.867 request: 00:12:31.867 { 00:12:31.867 "method": "nvmf_create_subsystem", 00:12:31.867 "params": { 00:12:31.867 "nqn": "nqn.2016-06.io.spdk:cnode30359", 00:12:31.867 "max_cntlid": 0 00:12:31.867 } 00:12:31.867 } 00:12:31.867 Got JSON-RPC error response 00:12:31.867 GoRPCClient: error on JSON-RPC call' 00:12:31.867 07:18:33 -- target/invalid.sh@78 -- # [[ 2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:31.867 request: 00:12:31.867 { 00:12:31.867 "method": "nvmf_create_subsystem", 00:12:31.867 "params": { 00:12:31.867 "nqn": 
"nqn.2016-06.io.spdk:cnode30359", 00:12:31.867 "max_cntlid": 0 00:12:31.867 } 00:12:31.867 } 00:12:31.867 Got JSON-RPC error response 00:12:31.867 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.867 07:18:33 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20205 -I 65520 00:12:32.126 [2024-11-04 07:18:33.940950] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20205: invalid cntlid range [1-65520] 00:12:32.126 07:18:33 -- target/invalid.sh@79 -- # out='2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20205], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:32.126 request: 00:12:32.126 { 00:12:32.126 "method": "nvmf_create_subsystem", 00:12:32.126 "params": { 00:12:32.126 "nqn": "nqn.2016-06.io.spdk:cnode20205", 00:12:32.126 "max_cntlid": 65520 00:12:32.126 } 00:12:32.126 } 00:12:32.126 Got JSON-RPC error response 00:12:32.126 GoRPCClient: error on JSON-RPC call' 00:12:32.126 07:18:33 -- target/invalid.sh@80 -- # [[ 2024/11/04 07:18:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20205], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:32.126 request: 00:12:32.126 { 00:12:32.126 "method": "nvmf_create_subsystem", 00:12:32.126 "params": { 00:12:32.126 "nqn": "nqn.2016-06.io.spdk:cnode20205", 00:12:32.126 "max_cntlid": 65520 00:12:32.126 } 00:12:32.126 } 00:12:32.126 Got JSON-RPC error response 00:12:32.126 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.126 07:18:33 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15977 -i 6 -I 5 00:12:32.384 [2024-11-04 07:18:34.153308] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15977: invalid cntlid range [6-5] 00:12:32.384 07:18:34 -- target/invalid.sh@83 -- # out='2024/11/04 07:18:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15977], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:32.384 request: 00:12:32.384 { 00:12:32.384 "method": "nvmf_create_subsystem", 00:12:32.384 "params": { 00:12:32.384 "nqn": "nqn.2016-06.io.spdk:cnode15977", 00:12:32.384 "min_cntlid": 6, 00:12:32.384 "max_cntlid": 5 00:12:32.384 } 00:12:32.384 } 00:12:32.384 Got JSON-RPC error response 00:12:32.384 GoRPCClient: error on JSON-RPC call' 00:12:32.384 07:18:34 -- target/invalid.sh@84 -- # [[ 2024/11/04 07:18:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15977], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:32.384 request: 00:12:32.384 { 00:12:32.384 "method": "nvmf_create_subsystem", 00:12:32.384 "params": { 00:12:32.384 "nqn": "nqn.2016-06.io.spdk:cnode15977", 00:12:32.384 "min_cntlid": 6, 00:12:32.384 "max_cntlid": 5 00:12:32.384 } 00:12:32.384 } 00:12:32.384 Got JSON-RPC error response 00:12:32.384 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.384 07:18:34 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:32.642 07:18:34 -- target/invalid.sh@87 -- # out='request: 00:12:32.642 { 00:12:32.642 "name": "foobar", 00:12:32.642 "method": "nvmf_delete_target", 00:12:32.642 "req_id": 1 00:12:32.642 } 00:12:32.642 Got JSON-RPC error response 00:12:32.642 response: 00:12:32.642 { 00:12:32.642 "code": -32602, 00:12:32.642 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:32.642 }' 00:12:32.642 07:18:34 -- target/invalid.sh@88 -- # [[ request: 00:12:32.642 { 00:12:32.642 "name": "foobar", 00:12:32.642 "method": "nvmf_delete_target", 00:12:32.642 "req_id": 1 00:12:32.642 } 00:12:32.642 Got JSON-RPC error response 00:12:32.642 response: 00:12:32.642 { 00:12:32.642 "code": -32602, 00:12:32.642 "message": "The specified target doesn't exist, cannot delete it." 00:12:32.642 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:32.642 07:18:34 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:32.642 07:18:34 -- target/invalid.sh@91 -- # nvmftestfini 00:12:32.642 07:18:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:32.642 07:18:34 -- nvmf/common.sh@116 -- # sync 00:12:32.642 07:18:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:32.642 07:18:34 -- nvmf/common.sh@119 -- # set +e 00:12:32.642 07:18:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:32.642 07:18:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:32.642 rmmod nvme_tcp 00:12:32.642 rmmod nvme_fabrics 00:12:32.642 rmmod nvme_keyring 00:12:32.642 07:18:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.642 07:18:34 -- nvmf/common.sh@123 -- # set -e 00:12:32.642 07:18:34 -- nvmf/common.sh@124 -- # return 0 00:12:32.642 07:18:34 -- nvmf/common.sh@477 -- # '[' -n 78382 ']' 00:12:32.642 07:18:34 -- nvmf/common.sh@478 -- # killprocess 78382 00:12:32.642 07:18:34 -- common/autotest_common.sh@926 -- # '[' -z 78382 ']' 00:12:32.642 07:18:34 -- common/autotest_common.sh@930 -- # kill -0 78382 00:12:32.642 07:18:34 -- common/autotest_common.sh@931 -- # uname 00:12:32.642 07:18:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:32.642 07:18:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78382 00:12:32.642 killing process with pid 78382 00:12:32.642 07:18:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:32.642 07:18:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:32.642 07:18:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78382' 00:12:32.642 07:18:34 -- common/autotest_common.sh@945 -- # kill 78382 00:12:32.643 07:18:34 -- common/autotest_common.sh@950 -- # wait 78382 00:12:32.901 07:18:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.901 07:18:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.901 07:18:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.901 07:18:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.901 07:18:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.901 07:18:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.901 07:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.901 07:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.901 07:18:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:32.901 
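(Reference note: every negative case in this test follows the same pattern visible in the trace above — invoke rpc.py with one deliberately invalid argument, capture the JSON-RPC error text, and glob-match the expected message such as "Unable to find target", "Invalid SN", "Invalid MN", or "Invalid cntlid range". A minimal sketch of that pattern follows, using the rpc.py path from the trace; the check_rpc_error helper name is illustrative and not part of the harness.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    check_rpc_error() {   # usage: check_rpc_error '<expected substring>' <rpc.py args...>
        local expected=$1; shift
        local out
        out=$("$rpc" "$@" 2>&1) && return 1       # the call must fail...
        [[ $out == *"$expected"* ]]               # ...and report the expected error text
    }
    check_rpc_error 'Unable to find target'  nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4004
    check_rpc_error 'Invalid cntlid range'   nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25086 -i 0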
************************************ 00:12:32.901 END TEST nvmf_invalid 00:12:32.901 ************************************ 00:12:32.901 00:12:32.901 real 0m5.953s 00:12:32.901 user 0m23.787s 00:12:32.901 sys 0m1.313s 00:12:32.901 07:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.901 07:18:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.159 07:18:34 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:33.159 07:18:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:33.159 07:18:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.159 07:18:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.159 ************************************ 00:12:33.159 START TEST nvmf_abort 00:12:33.159 ************************************ 00:12:33.159 07:18:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:33.159 * Looking for test storage... 00:12:33.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.159 07:18:34 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.159 07:18:34 -- nvmf/common.sh@7 -- # uname -s 00:12:33.159 07:18:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.159 07:18:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.159 07:18:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.159 07:18:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.159 07:18:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.159 07:18:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.159 07:18:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.159 07:18:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.159 07:18:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.159 07:18:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.159 07:18:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:33.159 07:18:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:33.159 07:18:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.159 07:18:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.159 07:18:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.159 07:18:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.159 07:18:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.159 07:18:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.160 07:18:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.160 07:18:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.160 07:18:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.160 07:18:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.160 07:18:34 -- paths/export.sh@5 -- # export PATH 00:12:33.160 07:18:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.160 07:18:34 -- nvmf/common.sh@46 -- # : 0 00:12:33.160 07:18:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:33.160 07:18:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:33.160 07:18:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:33.160 07:18:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.160 07:18:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.160 07:18:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:33.160 07:18:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:33.160 07:18:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:33.160 07:18:34 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.160 07:18:34 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:33.160 07:18:34 -- target/abort.sh@14 -- # nvmftestinit 00:12:33.160 07:18:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:33.160 07:18:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.160 07:18:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:33.160 07:18:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:33.160 07:18:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:33.160 07:18:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.160 07:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.160 07:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.160 07:18:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:33.160 07:18:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:33.160 07:18:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:33.160 07:18:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:33.160 07:18:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:33.160 07:18:34 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:12:33.160 07:18:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.160 07:18:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.160 07:18:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:33.160 07:18:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:33.160 07:18:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.160 07:18:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.160 07:18:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.160 07:18:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.160 07:18:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.160 07:18:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.160 07:18:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.160 07:18:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.160 07:18:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:33.160 07:18:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:33.160 Cannot find device "nvmf_tgt_br" 00:12:33.160 07:18:34 -- nvmf/common.sh@154 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.160 Cannot find device "nvmf_tgt_br2" 00:12:33.160 07:18:34 -- nvmf/common.sh@155 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:33.160 07:18:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:33.160 Cannot find device "nvmf_tgt_br" 00:12:33.160 07:18:34 -- nvmf/common.sh@157 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:33.160 Cannot find device "nvmf_tgt_br2" 00:12:33.160 07:18:34 -- nvmf/common.sh@158 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:33.160 07:18:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:33.160 07:18:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.160 07:18:34 -- nvmf/common.sh@161 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.160 07:18:34 -- nvmf/common.sh@162 -- # true 00:12:33.160 07:18:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:33.160 07:18:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:33.418 07:18:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:33.418 07:18:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:33.418 07:18:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:33.418 07:18:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:33.418 07:18:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:33.418 07:18:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:33.418 07:18:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:33.418 07:18:35 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:12:33.418 07:18:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:33.418 07:18:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:33.418 07:18:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:33.418 07:18:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.418 07:18:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.418 07:18:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.418 07:18:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:33.418 07:18:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:33.418 07:18:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.418 07:18:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.418 07:18:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.418 07:18:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.418 07:18:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.418 07:18:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:33.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:12:33.419 00:12:33.419 --- 10.0.0.2 ping statistics --- 00:12:33.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.419 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:33.419 07:18:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:33.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:33.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:33.419 00:12:33.419 --- 10.0.0.3 ping statistics --- 00:12:33.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.419 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:33.419 07:18:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:33.419 00:12:33.419 --- 10.0.0.1 ping statistics --- 00:12:33.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.419 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:33.419 07:18:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.419 07:18:35 -- nvmf/common.sh@421 -- # return 0 00:12:33.419 07:18:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:33.419 07:18:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.419 07:18:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:33.419 07:18:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:33.419 07:18:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.419 07:18:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:33.419 07:18:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:33.419 07:18:35 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:33.419 07:18:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:33.419 07:18:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:33.419 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.419 07:18:35 -- nvmf/common.sh@469 -- # nvmfpid=78891 00:12:33.419 07:18:35 -- nvmf/common.sh@470 -- # waitforlisten 78891 00:12:33.419 07:18:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:33.419 07:18:35 -- common/autotest_common.sh@819 -- # '[' -z 78891 ']' 00:12:33.419 07:18:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.419 07:18:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:33.419 07:18:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.419 07:18:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:33.419 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.419 [2024-11-04 07:18:35.222443] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:33.419 [2024-11-04 07:18:35.222661] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.677 [2024-11-04 07:18:35.360909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.677 [2024-11-04 07:18:35.435019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:33.677 [2024-11-04 07:18:35.435400] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.677 [2024-11-04 07:18:35.435541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.677 [2024-11-04 07:18:35.435788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
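The veth topology that nvmf_veth_init traced out above can be restated as a standalone sequence; this is a minimal sketch using the same names and addresses as the trace (nvmf_br, nvmf_tgt_ns_spdk, 10.0.0.1-3), not a verbatim copy of test/nvmf/common.sh:

  # sketch: initiator stays on the host, target interfaces move into a network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # host-side initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the three host-side peers
  for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$peer" up
      ip link set "$peer" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # same sanity checks as in the trace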
00:12:33.677 [2024-11-04 07:18:35.438947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.677 [2024-11-04 07:18:35.439541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.677 [2024-11-04 07:18:35.439555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.613 07:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:34.613 07:18:36 -- common/autotest_common.sh@852 -- # return 0 00:12:34.613 07:18:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:34.613 07:18:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 07:18:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.613 07:18:36 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 [2024-11-04 07:18:36.292078] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 Malloc0 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 Delay0 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 [2024-11-04 07:18:36.368171] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:34.613 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.613 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.613 07:18:36 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:34.872 [2024-11-04 07:18:36.554335] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:36.775 Initializing NVMe Controllers 00:12:36.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:36.775 controller IO queue size 128 less than required 00:12:36.775 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:36.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:36.775 Initialization complete. Launching workers. 00:12:36.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38720 00:12:36.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38781, failed to submit 62 00:12:36.775 success 38720, unsuccess 61, failed 0 00:12:36.775 07:18:38 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.775 07:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.775 07:18:38 -- common/autotest_common.sh@10 -- # set +x 00:12:36.775 07:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.775 07:18:38 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:36.775 07:18:38 -- target/abort.sh@38 -- # nvmftestfini 00:12:36.775 07:18:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.775 07:18:38 -- nvmf/common.sh@116 -- # sync 00:12:37.034 07:18:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:37.034 07:18:38 -- nvmf/common.sh@119 -- # set +e 00:12:37.034 07:18:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:37.034 07:18:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:37.034 rmmod nvme_tcp 00:12:37.034 rmmod nvme_fabrics 00:12:37.034 rmmod nvme_keyring 00:12:37.034 07:18:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:37.034 07:18:38 -- nvmf/common.sh@123 -- # set -e 00:12:37.034 07:18:38 -- nvmf/common.sh@124 -- # return 0 00:12:37.034 07:18:38 -- nvmf/common.sh@477 -- # '[' -n 78891 ']' 00:12:37.034 07:18:38 -- nvmf/common.sh@478 -- # killprocess 78891 00:12:37.034 07:18:38 -- common/autotest_common.sh@926 -- # '[' -z 78891 ']' 00:12:37.034 07:18:38 -- common/autotest_common.sh@930 -- # kill -0 78891 00:12:37.034 07:18:38 -- common/autotest_common.sh@931 -- # uname 00:12:37.034 07:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:37.034 07:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78891 00:12:37.034 killing process with pid 78891 00:12:37.034 07:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:37.034 07:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:37.034 07:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78891' 00:12:37.034 07:18:38 -- common/autotest_common.sh@945 -- # kill 78891 00:12:37.034 07:18:38 -- common/autotest_common.sh@950 -- # wait 78891 00:12:37.293 07:18:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:37.293 07:18:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:37.293 07:18:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:37.293 07:18:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.293 07:18:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:37.293 07:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.293 
07:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.293 07:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.293 07:18:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:37.293 00:12:37.293 real 0m4.233s 00:12:37.293 user 0m12.431s 00:12:37.293 sys 0m1.013s 00:12:37.293 07:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.293 ************************************ 00:12:37.293 END TEST nvmf_abort 00:12:37.293 ************************************ 00:12:37.293 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.293 07:18:39 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:37.293 07:18:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:37.293 07:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.293 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.293 ************************************ 00:12:37.293 START TEST nvmf_ns_hotplug_stress 00:12:37.293 ************************************ 00:12:37.293 07:18:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:37.293 * Looking for test storage... 00:12:37.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.552 07:18:39 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.552 07:18:39 -- nvmf/common.sh@7 -- # uname -s 00:12:37.552 07:18:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.552 07:18:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.552 07:18:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.552 07:18:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.552 07:18:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.552 07:18:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.552 07:18:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.552 07:18:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.552 07:18:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.552 07:18:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.552 07:18:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:37.552 07:18:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:12:37.552 07:18:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.552 07:18:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.552 07:18:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.552 07:18:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.552 07:18:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.552 07:18:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.552 07:18:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.553 07:18:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 07:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 07:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 07:18:39 -- paths/export.sh@5 -- # export PATH 00:12:37.553 07:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.553 07:18:39 -- nvmf/common.sh@46 -- # : 0 00:12:37.553 07:18:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:37.553 07:18:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:37.553 07:18:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:37.553 07:18:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.553 07:18:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.553 07:18:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:37.553 07:18:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:37.553 07:18:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:37.553 07:18:39 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:37.553 07:18:39 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:37.553 07:18:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:37.553 07:18:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.553 07:18:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:37.553 07:18:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:37.553 07:18:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:37.553 07:18:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:37.553 07:18:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.553 07:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.553 07:18:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:37.553 07:18:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:37.553 07:18:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:37.553 07:18:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:37.553 07:18:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:37.553 07:18:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:37.553 07:18:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.553 07:18:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.553 07:18:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:37.553 07:18:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:37.553 07:18:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.553 07:18:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.553 07:18:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.553 07:18:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.553 07:18:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.553 07:18:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.553 07:18:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.553 07:18:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.553 07:18:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:37.553 07:18:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:37.553 Cannot find device "nvmf_tgt_br" 00:12:37.553 07:18:39 -- nvmf/common.sh@154 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.553 Cannot find device "nvmf_tgt_br2" 00:12:37.553 07:18:39 -- nvmf/common.sh@155 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:37.553 07:18:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:37.553 Cannot find device "nvmf_tgt_br" 00:12:37.553 07:18:39 -- nvmf/common.sh@157 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:37.553 Cannot find device "nvmf_tgt_br2" 00:12:37.553 07:18:39 -- nvmf/common.sh@158 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:37.553 07:18:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:37.553 07:18:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.553 07:18:39 -- nvmf/common.sh@161 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.553 07:18:39 -- nvmf/common.sh@162 -- # true 00:12:37.553 07:18:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.553 07:18:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.553 07:18:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.553 07:18:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.553 07:18:39 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.553 07:18:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.553 07:18:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.553 07:18:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.553 07:18:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.553 07:18:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:37.553 07:18:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:37.553 07:18:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:37.553 07:18:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:37.553 07:18:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.812 07:18:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.812 07:18:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.812 07:18:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:37.812 07:18:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:37.812 07:18:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.812 07:18:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.812 07:18:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.812 07:18:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.812 07:18:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.812 07:18:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:37.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:37.812 00:12:37.812 --- 10.0.0.2 ping statistics --- 00:12:37.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.812 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:37.812 07:18:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:37.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:12:37.812 00:12:37.812 --- 10.0.0.3 ping statistics --- 00:12:37.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.812 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:37.812 07:18:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:37.812 00:12:37.812 --- 10.0.0.1 ping statistics --- 00:12:37.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.812 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:37.812 07:18:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.812 07:18:39 -- nvmf/common.sh@421 -- # return 0 00:12:37.812 07:18:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.812 07:18:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.812 07:18:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.812 07:18:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.812 07:18:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.812 07:18:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.812 07:18:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.812 07:18:39 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:37.812 07:18:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.812 07:18:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:37.812 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.812 07:18:39 -- nvmf/common.sh@469 -- # nvmfpid=79158 00:12:37.812 07:18:39 -- nvmf/common.sh@470 -- # waitforlisten 79158 00:12:37.812 07:18:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:37.812 07:18:39 -- common/autotest_common.sh@819 -- # '[' -z 79158 ']' 00:12:37.812 07:18:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.812 07:18:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.812 07:18:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.812 07:18:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.812 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.812 [2024-11-04 07:18:39.558305] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:37.812 [2024-11-04 07:18:39.559114] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.070 [2024-11-04 07:18:39.703271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.070 [2024-11-04 07:18:39.775590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.070 [2024-11-04 07:18:39.776068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.070 [2024-11-04 07:18:39.776272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.070 [2024-11-04 07:18:39.776301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
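The hotplug test's target bring-up that follows is a short RPC sequence; it is restated below as a sketch with the names from the trace (cnode1, Malloc0, Delay0, NULL1). The comments on the bdev_delay_create latencies and on perf's -Q flag are assumptions about those options, not something this log states:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -m 10: cap at 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0                      # 32 MiB backing bdev, 512-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
      # assumed: average/p99 read and write latencies in microseconds, to keep I/O in flight longer
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1: the slow delay bdev
  $RPC bdev_null_create NULL1 1000 512                           # null bdev the test keeps resizing
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # namespace 2: the hotplug target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                  # 30 s random-read load, QD 128, 512 B I/O
  PERF_PID=$!                                                    # -Q 1000 presumably rate-limits error prints
                                                                 # (hence the 'Message suppressed 999 times' lines)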
00:12:38.070 [2024-11-04 07:18:39.776478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.070 [2024-11-04 07:18:39.776704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.070 [2024-11-04 07:18:39.776718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.005 07:18:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.005 07:18:40 -- common/autotest_common.sh@852 -- # return 0 00:12:39.005 07:18:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:39.005 07:18:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.005 07:18:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.005 07:18:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.005 07:18:40 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:39.005 07:18:40 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:39.263 [2024-11-04 07:18:40.901703] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.263 07:18:40 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.520 07:18:41 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.779 [2024-11-04 07:18:41.388378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.779 07:18:41 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.038 07:18:41 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:40.038 Malloc0 00:12:40.038 07:18:41 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:40.295 Delay0 00:12:40.295 07:18:42 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.554 07:18:42 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:40.812 NULL1 00:12:40.812 07:18:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:41.071 07:18:42 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:41.071 07:18:42 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79289 00:12:41.071 07:18:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:41.071 07:18:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.447 Read completed with error (sct=0, sc=11) 00:12:42.447 07:18:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.447 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:42.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.447 07:18:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:42.447 07:18:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:42.706 true 00:12:42.706 07:18:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:42.706 07:18:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.641 07:18:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.900 07:18:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:43.900 07:18:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:43.900 true 00:12:43.900 07:18:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:43.900 07:18:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.158 07:18:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.416 07:18:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:44.416 07:18:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:44.675 true 00:12:44.675 07:18:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:44.675 07:18:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.610 07:18:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.868 07:18:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:45.868 07:18:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:46.127 true 00:12:46.127 07:18:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:46.127 07:18:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.386 07:18:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.670 07:18:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:46.670 07:18:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:46.670 true 00:12:46.670 07:18:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:46.670 07:18:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.608 07:18:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.867 07:18:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
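Each iteration of the resize loop above repeats the same traced steps; a minimal sketch of that loop, assuming the script drives it off the perf process started earlier (PERF_PID) and the null_size counter initialized to 1000:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                         # keep going while the perf workload is alive
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank namespace 1 out from under the initiator
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
      null_size=$((null_size + 1))                                  # 1001, 1002, ... as in the trace
      $RPC bdev_null_resize NULL1 "$null_size"                      # grow the other live namespace under I/O
  done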
00:12:47.867 07:18:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:48.125 true 00:12:48.125 07:18:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:48.125 07:18:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.384 07:18:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.642 07:18:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:48.642 07:18:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:48.900 true 00:12:48.900 07:18:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:48.900 07:18:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.159 07:18:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.417 07:18:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:49.417 07:18:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:49.675 true 00:12:49.675 07:18:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:49.675 07:18:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.610 07:18:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.869 07:18:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:50.869 07:18:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:51.127 true 00:12:51.127 07:18:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:51.127 07:18:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.385 07:18:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.385 07:18:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:51.385 07:18:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:51.644 true 00:12:51.644 07:18:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:51.644 07:18:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.579 07:18:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.838 07:18:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:52.838 07:18:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:53.096 true 00:12:53.096 07:18:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:53.096 07:18:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.354 07:18:54 -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.612 07:18:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:53.612 07:18:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:53.612 true 00:12:53.871 07:18:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:53.871 07:18:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.806 07:18:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.806 07:18:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:54.806 07:18:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:55.064 true 00:12:55.064 07:18:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:55.064 07:18:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.323 07:18:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.323 07:18:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:55.323 07:18:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:55.581 true 00:12:55.581 07:18:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:55.581 07:18:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.518 07:18:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.776 07:18:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:56.776 07:18:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:57.034 true 00:12:57.034 07:18:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:57.034 07:18:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.293 07:18:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.551 07:18:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:57.551 07:18:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:57.810 true 00:12:57.810 07:18:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:57.810 07:18:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.747 07:19:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.747 07:19:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:58.747 07:19:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:59.006 true 00:12:59.006 07:19:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:59.006 07:19:00 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.266 07:19:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.525 07:19:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:59.525 07:19:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:59.525 true 00:12:59.784 07:19:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:12:59.784 07:19:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.721 07:19:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.980 07:19:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:00.980 07:19:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:00.980 true 00:13:00.980 07:19:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:00.980 07:19:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.239 07:19:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.508 07:19:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:01.508 07:19:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:01.820 true 00:13:01.820 07:19:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:01.820 07:19:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.773 07:19:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.032 07:19:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:03.032 07:19:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:03.032 true 00:13:03.032 07:19:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:03.032 07:19:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.291 07:19:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.550 07:19:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:03.550 07:19:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:03.810 true 00:13:03.810 07:19:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:03.810 07:19:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.747 07:19:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.006 07:19:06 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:13:05.006 07:19:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:05.006 true 00:13:05.006 07:19:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:05.006 07:19:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.266 07:19:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.526 07:19:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:05.526 07:19:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:05.785 true 00:13:05.785 07:19:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:05.785 07:19:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.722 07:19:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.982 07:19:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:06.982 07:19:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:07.241 true 00:13:07.241 07:19:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:07.241 07:19:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.500 07:19:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.500 07:19:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:07.500 07:19:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:07.759 true 00:13:07.759 07:19:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:07.759 07:19:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.694 07:19:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.954 07:19:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:08.954 07:19:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:09.213 true 00:13:09.213 07:19:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:09.213 07:19:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.472 07:19:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.730 07:19:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:09.730 07:19:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:09.989 true 00:13:09.989 07:19:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:09.989 07:19:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.928 07:19:12 -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.928 07:19:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:10.928 07:19:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:11.187 true 00:13:11.187 07:19:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:11.187 07:19:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.187 Initializing NVMe Controllers 00:13:11.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:11.187 Controller IO queue size 128, less than required. 00:13:11.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.187 Controller IO queue size 128, less than required. 00:13:11.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:11.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:11.187 Initialization complete. Launching workers. 00:13:11.187 ======================================================== 00:13:11.187 Latency(us) 00:13:11.187 Device Information : IOPS MiB/s Average min max 00:13:11.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 314.58 0.15 219578.10 4243.78 1064140.03 00:13:11.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13577.18 6.63 9427.20 2608.42 452886.76 00:13:11.187 ======================================================== 00:13:11.187 Total : 13891.76 6.78 14186.12 2608.42 1064140.03 00:13:11.187 00:13:11.446 07:19:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.705 07:19:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:11.705 07:19:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:11.964 true 00:13:11.964 07:19:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79289 00:13:11.964 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79289) - No such process 00:13:11.964 07:19:13 -- target/ns_hotplug_stress.sh@53 -- # wait 79289 00:13:11.964 07:19:13 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.964 07:19:13 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:12.533 null0 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.533 07:19:14 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:12.792 null1 00:13:12.792 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.792 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.792 07:19:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:13.051 null2 00:13:13.051 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.051 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.051 07:19:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:13.311 null3 00:13:13.311 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.311 07:19:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.311 07:19:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:13.311 null4 00:13:13.311 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.311 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.311 07:19:15 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:13.570 null5 00:13:13.570 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.570 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.570 07:19:15 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:13.829 null6 00:13:13.829 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.829 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.829 07:19:15 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:14.089 null7 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
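A rough sketch of the hot-resize loop traced at the @44-@55 markers above (variable names such as $rpc, $perf_pid and $null_size are illustrative, not the script's exact spelling):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while kill -0 "$perf_pid" 2>/dev/null; do                     # keep going while the I/O generator is still alive
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      $rpc bdev_null_resize NULL1 $((++null_size))              # grow the null bdev each pass (1029, 1030, ...)
  done
  wait "$perf_pid"                                              # reap it once kill -0 reports "No such process"
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2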
00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
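The eight-worker fan-out being traced here boils down to roughly the following (loop shape inferred from the @58-@64 markers; add_remove is the helper name the script itself uses, $rpc again stands for scripts/rpc.py):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nthreads=8 pids=()
  for (( i = 0; i < nthreads; i++ )); do
      $rpc bdev_null_create "null$i" 100 4096           # one 100 MB null bdev (4096-byte blocks) per worker
  done
  for (( i = 0; i < nthreads; i++ )); do
      add_remove $((i + 1)) "null$i" &                  # worker i churns namespace ID i+1 against cnode1
      pids+=($!)
  done
  wait "${pids[@]}"                                     # the "wait 80337 80338 ..." line just below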
00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@66 -- # wait 80337 80338 80340 80343 80345 80346 80348 80349 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.089 07:19:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.348 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.349 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.608 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.868 07:19:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
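The add_remove helper whose iterations fill the rest of this log is, reconstructed from the @14-@18 trace, essentially:

  add_remove() {                                        # attach and detach one namespace ten times
      # $rpc = /home/vagrant/spdk_repo/spdk/scripts/rpc.py, as above
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }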
00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.127 07:19:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.386 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.645 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.904 07:19:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.188 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.189 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.189 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.189 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.189 07:19:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.189 07:19:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.452 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.710 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.710 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.710 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.710 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.711 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.969 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.228 07:19:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.228 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.228 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.487 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.745 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.012 07:19:19 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.012 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.275 07:19:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:13:18.275 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.533 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.533 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.533 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.533 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.534 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.792 07:19:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.051 07:19:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.309 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.309 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.309 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.309 07:19:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:19.309 07:19:21 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:19.309 07:19:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:19.309 07:19:21 -- nvmf/common.sh@116 -- # sync 00:13:19.309 07:19:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:19.309 07:19:21 -- nvmf/common.sh@119 -- # set +e 00:13:19.309 07:19:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:19.309 07:19:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:19.309 rmmod nvme_tcp 00:13:19.309 rmmod nvme_fabrics 00:13:19.309 rmmod nvme_keyring 00:13:19.309 07:19:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:19.309 07:19:21 -- nvmf/common.sh@123 -- # set -e 00:13:19.309 07:19:21 -- nvmf/common.sh@124 -- # return 0 00:13:19.309 07:19:21 -- nvmf/common.sh@477 -- # '[' -n 79158 ']' 00:13:19.309 07:19:21 -- nvmf/common.sh@478 -- # killprocess 79158 00:13:19.309 07:19:21 -- common/autotest_common.sh@926 -- # '[' -z 79158 ']' 00:13:19.309 07:19:21 -- common/autotest_common.sh@930 -- # kill -0 79158 00:13:19.309 07:19:21 -- common/autotest_common.sh@931 -- # uname 00:13:19.309 07:19:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:19.309 07:19:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79158 
00:13:19.568 killing process with pid 79158 00:13:19.568 07:19:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:19.568 07:19:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:19.568 07:19:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79158' 00:13:19.568 07:19:21 -- common/autotest_common.sh@945 -- # kill 79158 00:13:19.568 07:19:21 -- common/autotest_common.sh@950 -- # wait 79158 00:13:19.828 07:19:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:19.828 07:19:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:19.828 07:19:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:19.828 07:19:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.828 07:19:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.828 07:19:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.828 07:19:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:19.828 00:13:19.828 real 0m42.390s 00:13:19.828 user 3m21.159s 00:13:19.828 sys 0m11.855s 00:13:19.828 07:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.828 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:13:19.828 ************************************ 00:13:19.828 END TEST nvmf_ns_hotplug_stress 00:13:19.828 ************************************ 00:13:19.828 07:19:21 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.828 07:19:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:19.828 07:19:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.828 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:13:19.828 ************************************ 00:13:19.828 START TEST nvmf_connect_stress 00:13:19.828 ************************************ 00:13:19.828 07:19:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.828 * Looking for test storage... 
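For reference, the nvmftestfini teardown traced just above (before nvmf_connect_stress starts its own setup) amounts to roughly the following; $nvmfpid is an illustrative name for the target pid (79158 here):

  sync
  modprobe -v -r nvme-tcp                               # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactor
  # _remove_spdk_ns then tears down the nvmf_tgt_ns_spdk namespace (helper body not expanded in this trace)
  ip -4 addr flush nvmf_init_if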
00:13:19.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.828 07:19:21 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.828 07:19:21 -- nvmf/common.sh@7 -- # uname -s 00:13:19.828 07:19:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.828 07:19:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.828 07:19:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.828 07:19:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.828 07:19:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.828 07:19:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.828 07:19:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.828 07:19:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.828 07:19:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.828 07:19:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:19.828 07:19:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:19.828 07:19:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.828 07:19:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.828 07:19:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.828 07:19:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.828 07:19:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.828 07:19:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.828 07:19:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.828 07:19:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.828 07:19:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.828 07:19:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.828 07:19:21 -- 
paths/export.sh@5 -- # export PATH 00:13:19.828 07:19:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.828 07:19:21 -- nvmf/common.sh@46 -- # : 0 00:13:19.828 07:19:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:19.828 07:19:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:19.828 07:19:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:19.828 07:19:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.828 07:19:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.828 07:19:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:19.828 07:19:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:19.828 07:19:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:19.828 07:19:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.828 07:19:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:19.828 07:19:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.828 07:19:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:19.828 07:19:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:19.828 07:19:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:19.828 07:19:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.828 07:19:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.828 07:19:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.828 07:19:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:19.828 07:19:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:19.828 07:19:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.828 07:19:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.828 07:19:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:19.828 07:19:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:19.828 07:19:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.828 07:19:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.828 07:19:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.828 07:19:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.828 07:19:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.828 07:19:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.828 07:19:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.828 07:19:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.828 07:19:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:19.829 07:19:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:19.829 Cannot find device "nvmf_tgt_br" 00:13:19.829 
07:19:21 -- nvmf/common.sh@154 -- # true 00:13:19.829 07:19:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.829 Cannot find device "nvmf_tgt_br2" 00:13:19.829 07:19:21 -- nvmf/common.sh@155 -- # true 00:13:19.829 07:19:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:19.829 07:19:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:19.829 Cannot find device "nvmf_tgt_br" 00:13:19.829 07:19:21 -- nvmf/common.sh@157 -- # true 00:13:19.829 07:19:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:19.829 Cannot find device "nvmf_tgt_br2" 00:13:19.829 07:19:21 -- nvmf/common.sh@158 -- # true 00:13:19.829 07:19:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:20.088 07:19:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:20.088 07:19:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.088 07:19:21 -- nvmf/common.sh@161 -- # true 00:13:20.088 07:19:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.088 07:19:21 -- nvmf/common.sh@162 -- # true 00:13:20.088 07:19:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.088 07:19:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.088 07:19:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.088 07:19:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.088 07:19:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.088 07:19:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.088 07:19:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.088 07:19:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.088 07:19:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.088 07:19:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:20.088 07:19:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:20.088 07:19:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:20.088 07:19:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:20.088 07:19:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.088 07:19:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.088 07:19:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.088 07:19:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:20.088 07:19:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:20.088 07:19:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.088 07:19:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.088 07:19:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.088 07:19:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.088 07:19:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.088 07:19:21 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:20.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:20.088 00:13:20.088 --- 10.0.0.2 ping statistics --- 00:13:20.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.088 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:20.088 07:19:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:20.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:20.088 00:13:20.088 --- 10.0.0.3 ping statistics --- 00:13:20.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.088 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:20.088 07:19:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:20.088 00:13:20.088 --- 10.0.0.1 ping statistics --- 00:13:20.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.088 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:20.088 07:19:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.088 07:19:21 -- nvmf/common.sh@421 -- # return 0 00:13:20.088 07:19:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.088 07:19:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.088 07:19:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.088 07:19:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.088 07:19:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.088 07:19:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.088 07:19:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.347 07:19:21 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:20.347 07:19:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.347 07:19:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.347 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:13:20.347 07:19:21 -- nvmf/common.sh@469 -- # nvmfpid=81656 00:13:20.347 07:19:21 -- nvmf/common.sh@470 -- # waitforlisten 81656 00:13:20.347 07:19:21 -- common/autotest_common.sh@819 -- # '[' -z 81656 ']' 00:13:20.347 07:19:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.347 07:19:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.347 07:19:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.347 07:19:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.347 07:19:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.347 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:13:20.347 [2024-11-04 07:19:21.995515] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:20.347 [2024-11-04 07:19:21.995612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.347 [2024-11-04 07:19:22.127746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.606 [2024-11-04 07:19:22.211315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.606 [2024-11-04 07:19:22.211471] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.606 [2024-11-04 07:19:22.211484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.606 [2024-11-04 07:19:22.211493] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.606 [2024-11-04 07:19:22.212064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.606 [2024-11-04 07:19:22.212388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.606 [2024-11-04 07:19:22.212420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.174 07:19:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:21.174 07:19:22 -- common/autotest_common.sh@852 -- # return 0 00:13:21.174 07:19:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.174 07:19:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:21.174 07:19:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.174 07:19:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.174 07:19:22 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.174 07:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.174 07:19:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.174 [2024-11-04 07:19:23.002587] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.174 07:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.174 07:19:23 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.174 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.174 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:21.433 07:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.433 07:19:23 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.434 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.434 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:21.434 [2024-11-04 07:19:23.020723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.434 07:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.434 07:19:23 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:21.434 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.434 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:21.434 NULL1 00:13:21.434 07:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.434 07:19:23 -- target/connect_stress.sh@21 -- # PERF_PID=81708 00:13:21.434 07:19:23 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.434 
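(Annotation: the rpc_cmd calls traced above stand up a minimal NVMe-oF TCP target before the stress run: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1000 MiB null bdev used as backing storage. A minimal sketch of the same sequence, assuming a running nvmf_tgt and the stock SPDK scripts/rpc.py client; the flags are copied from the trace above, the rpc.py path is illustrative.)
# Create the TCP transport with the options used by the harness above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Subsystem allowing any host, with a serial number and up to 10 namespaces
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen for NVMe/TCP connections on 10.0.0.2 port 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 1000 MiB null bdev with 512-byte blocks; attaching it as a namespace
# (nvmf_subsystem_add_ns) is traced later in the fused_ordering setup in this log
./scripts/rpc.py bdev_null_create NULL1 1000 512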
07:19:23 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:21.434 07:19:23 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.434 07:19:23 -- target/connect_stress.sh@28 -- # cat 00:13:21.434 07:19:23 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:21.434 07:19:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.434 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.434 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:21.693 07:19:23 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.693 07:19:23 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:21.693 07:19:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.693 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.693 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:21.952 07:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.952 07:19:23 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:21.952 07:19:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.952 07:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.952 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:13:22.520 07:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.520 07:19:24 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:22.520 07:19:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.520 07:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.520 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:13:22.779 07:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.779 07:19:24 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:22.779 07:19:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.779 07:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.779 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:13:23.038 07:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.038 07:19:24 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:23.038 07:19:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.038 07:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.038 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:13:23.297 07:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.297 07:19:25 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:23.297 07:19:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.297 07:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.297 07:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:23.556 07:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.556 07:19:25 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:23.556 07:19:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.556 07:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.556 07:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:24.124 07:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.124 07:19:25 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:24.124 07:19:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.124 07:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.124 07:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:24.383 07:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.383 07:19:26 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:24.383 07:19:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.383 07:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.383 07:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:24.643 07:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.643 07:19:26 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:24.643 07:19:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.643 07:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.643 07:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:24.902 07:19:26 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:13:24.902 07:19:26 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:24.902 07:19:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.902 07:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.902 07:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:25.161 07:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.161 07:19:26 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:25.161 07:19:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.161 07:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.161 07:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:25.728 07:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.729 07:19:27 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:25.729 07:19:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.729 07:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.729 07:19:27 -- common/autotest_common.sh@10 -- # set +x 00:13:25.993 07:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.993 07:19:27 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:25.993 07:19:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.993 07:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.993 07:19:27 -- common/autotest_common.sh@10 -- # set +x 00:13:26.251 07:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.251 07:19:27 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:26.251 07:19:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.251 07:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.251 07:19:27 -- common/autotest_common.sh@10 -- # set +x 00:13:26.511 07:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.511 07:19:28 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:26.511 07:19:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.511 07:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.511 07:19:28 -- common/autotest_common.sh@10 -- # set +x 00:13:26.770 07:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.770 07:19:28 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:26.770 07:19:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.770 07:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.770 07:19:28 -- common/autotest_common.sh@10 -- # set +x 00:13:27.338 07:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.338 07:19:28 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:27.338 07:19:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.338 07:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.338 07:19:28 -- common/autotest_common.sh@10 -- # set +x 00:13:27.597 07:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.597 07:19:29 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:27.597 07:19:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.597 07:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.597 07:19:29 -- common/autotest_common.sh@10 -- # set +x 00:13:27.855 07:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.855 07:19:29 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:27.855 07:19:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.855 07:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.855 07:19:29 -- common/autotest_common.sh@10 -- # set +x 00:13:28.114 07:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.114 07:19:29 
-- target/connect_stress.sh@34 -- # kill -0 81708 00:13:28.114 07:19:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.114 07:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.114 07:19:29 -- common/autotest_common.sh@10 -- # set +x 00:13:28.682 07:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.682 07:19:30 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:28.682 07:19:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.682 07:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.682 07:19:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.941 07:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.941 07:19:30 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:28.941 07:19:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.941 07:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.941 07:19:30 -- common/autotest_common.sh@10 -- # set +x 00:13:29.199 07:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.199 07:19:30 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:29.199 07:19:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.199 07:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.199 07:19:30 -- common/autotest_common.sh@10 -- # set +x 00:13:29.459 07:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.459 07:19:31 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:29.459 07:19:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.459 07:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.459 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 07:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.718 07:19:31 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:29.718 07:19:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.718 07:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.718 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.287 07:19:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.287 07:19:31 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:30.287 07:19:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.287 07:19:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.287 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.546 07:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.546 07:19:32 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:30.546 07:19:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.546 07:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.546 07:19:32 -- common/autotest_common.sh@10 -- # set +x 00:13:30.805 07:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.805 07:19:32 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:30.805 07:19:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.805 07:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.805 07:19:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.064 07:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.065 07:19:32 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:31.065 07:19:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.065 07:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.065 07:19:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.322 07:19:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.322 07:19:33 -- target/connect_stress.sh@34 -- 
# kill -0 81708 00:13:31.322 07:19:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.322 07:19:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.322 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.580 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.839 07:19:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.839 07:19:33 -- target/connect_stress.sh@34 -- # kill -0 81708 00:13:31.839 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81708) - No such process 00:13:31.839 07:19:33 -- target/connect_stress.sh@38 -- # wait 81708 00:13:31.839 07:19:33 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:31.839 07:19:33 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:31.839 07:19:33 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:31.839 07:19:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:31.839 07:19:33 -- nvmf/common.sh@116 -- # sync 00:13:31.839 07:19:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:31.839 07:19:33 -- nvmf/common.sh@119 -- # set +e 00:13:31.839 07:19:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:31.839 07:19:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:31.839 rmmod nvme_tcp 00:13:31.839 rmmod nvme_fabrics 00:13:31.839 rmmod nvme_keyring 00:13:31.839 07:19:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:31.839 07:19:33 -- nvmf/common.sh@123 -- # set -e 00:13:31.839 07:19:33 -- nvmf/common.sh@124 -- # return 0 00:13:31.839 07:19:33 -- nvmf/common.sh@477 -- # '[' -n 81656 ']' 00:13:31.839 07:19:33 -- nvmf/common.sh@478 -- # killprocess 81656 00:13:31.839 07:19:33 -- common/autotest_common.sh@926 -- # '[' -z 81656 ']' 00:13:31.839 07:19:33 -- common/autotest_common.sh@930 -- # kill -0 81656 00:13:31.839 07:19:33 -- common/autotest_common.sh@931 -- # uname 00:13:31.839 07:19:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:31.839 07:19:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81656 00:13:31.839 07:19:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:31.839 07:19:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:31.839 killing process with pid 81656 00:13:31.839 07:19:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81656' 00:13:31.839 07:19:33 -- common/autotest_common.sh@945 -- # kill 81656 00:13:31.839 07:19:33 -- common/autotest_common.sh@950 -- # wait 81656 00:13:32.098 07:19:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:32.098 07:19:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:32.098 07:19:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:32.098 07:19:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.098 07:19:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:32.098 07:19:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.098 07:19:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.098 07:19:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.098 07:19:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:32.098 00:13:32.098 real 0m12.387s 00:13:32.098 user 0m41.560s 00:13:32.098 sys 0m3.044s 00:13:32.098 07:19:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.099 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 ************************************ 
00:13:32.099 END TEST nvmf_connect_stress 00:13:32.099 ************************************ 00:13:32.099 07:19:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.099 07:19:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:32.099 07:19:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:32.099 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.358 ************************************ 00:13:32.358 START TEST nvmf_fused_ordering 00:13:32.358 ************************************ 00:13:32.358 07:19:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.358 * Looking for test storage... 00:13:32.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.358 07:19:34 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.358 07:19:34 -- nvmf/common.sh@7 -- # uname -s 00:13:32.358 07:19:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.358 07:19:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.358 07:19:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.358 07:19:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.358 07:19:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.358 07:19:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.358 07:19:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.358 07:19:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.358 07:19:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.358 07:19:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:32.358 07:19:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:32.358 07:19:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.358 07:19:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.358 07:19:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.358 07:19:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.358 07:19:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.358 07:19:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.358 07:19:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.358 07:19:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.358 07:19:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.358 07:19:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.358 07:19:34 -- paths/export.sh@5 -- # export PATH 00:13:32.358 07:19:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.358 07:19:34 -- nvmf/common.sh@46 -- # : 0 00:13:32.358 07:19:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:32.358 07:19:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:32.358 07:19:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:32.358 07:19:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.358 07:19:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.358 07:19:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:32.358 07:19:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:32.358 07:19:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:32.358 07:19:34 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.358 07:19:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:32.358 07:19:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.358 07:19:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:32.358 07:19:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:32.358 07:19:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:32.358 07:19:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.358 07:19:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.358 07:19:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.358 07:19:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:32.358 07:19:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:32.358 07:19:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.358 
07:19:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.358 07:19:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:32.358 07:19:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:32.358 07:19:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.358 07:19:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.358 07:19:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.358 07:19:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.358 07:19:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.358 07:19:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.358 07:19:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.358 07:19:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.358 07:19:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:32.358 07:19:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:32.358 Cannot find device "nvmf_tgt_br" 00:13:32.358 07:19:34 -- nvmf/common.sh@154 -- # true 00:13:32.358 07:19:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.358 Cannot find device "nvmf_tgt_br2" 00:13:32.358 07:19:34 -- nvmf/common.sh@155 -- # true 00:13:32.358 07:19:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:32.358 07:19:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:32.358 Cannot find device "nvmf_tgt_br" 00:13:32.358 07:19:34 -- nvmf/common.sh@157 -- # true 00:13:32.358 07:19:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:32.358 Cannot find device "nvmf_tgt_br2" 00:13:32.358 07:19:34 -- nvmf/common.sh@158 -- # true 00:13:32.358 07:19:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:32.359 07:19:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:32.359 07:19:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.359 07:19:34 -- nvmf/common.sh@161 -- # true 00:13:32.359 07:19:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.359 07:19:34 -- nvmf/common.sh@162 -- # true 00:13:32.359 07:19:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.359 07:19:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.359 07:19:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.359 07:19:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.359 07:19:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.618 07:19:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.618 07:19:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.618 07:19:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:32.618 07:19:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:32.618 07:19:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:32.618 07:19:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:32.618 
07:19:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:32.618 07:19:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:32.618 07:19:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.618 07:19:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.618 07:19:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.618 07:19:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:32.618 07:19:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:32.618 07:19:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.618 07:19:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.618 07:19:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.618 07:19:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.618 07:19:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.618 07:19:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:32.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:32.618 00:13:32.618 --- 10.0.0.2 ping statistics --- 00:13:32.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.618 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:32.618 07:19:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:32.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:32.618 00:13:32.618 --- 10.0.0.3 ping statistics --- 00:13:32.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.618 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:32.618 07:19:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:32.618 00:13:32.618 --- 10.0.0.1 ping statistics --- 00:13:32.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.618 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:32.618 07:19:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.618 07:19:34 -- nvmf/common.sh@421 -- # return 0 00:13:32.618 07:19:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:32.618 07:19:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.618 07:19:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:32.618 07:19:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:32.618 07:19:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.618 07:19:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:32.618 07:19:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:32.618 07:19:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:32.618 07:19:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:32.618 07:19:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:32.618 07:19:34 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 07:19:34 -- nvmf/common.sh@469 -- # nvmfpid=82028 00:13:32.618 07:19:34 -- nvmf/common.sh@470 -- # waitforlisten 82028 00:13:32.618 07:19:34 -- common/autotest_common.sh@819 -- # '[' -z 82028 ']' 00:13:32.618 07:19:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.618 07:19:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.618 07:19:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.618 07:19:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.618 07:19:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:32.618 07:19:34 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 [2024-11-04 07:19:34.422678] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:32.618 [2024-11-04 07:19:34.422782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.877 [2024-11-04 07:19:34.556940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.877 [2024-11-04 07:19:34.635028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.877 [2024-11-04 07:19:34.635181] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.877 [2024-11-04 07:19:34.635196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.877 [2024-11-04 07:19:34.635204] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:32.877 [2024-11-04 07:19:34.635233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.813 07:19:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:33.813 07:19:35 -- common/autotest_common.sh@852 -- # return 0 00:13:33.813 07:19:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:33.813 07:19:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:33.813 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.813 07:19:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.813 07:19:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.813 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.813 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 [2024-11-04 07:19:35.418722] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.814 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.814 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.814 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.814 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 [2024-11-04 07:19:35.434860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:33.814 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.814 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 NULL1 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:33.814 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.814 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:33.814 07:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.814 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 07:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.814 07:19:35 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:33.814 [2024-11-04 07:19:35.485217] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:33.814 [2024-11-04 07:19:35.485268] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82078 ] 00:13:34.382 Attached to nqn.2016-06.io.spdk:cnode1 00:13:34.382 Namespace ID: 1 size: 1GB 00:13:34.382 fused_ordering(0) 00:13:34.382 fused_ordering(1) 00:13:34.382 fused_ordering(2) 00:13:34.382 fused_ordering(3) 00:13:34.382 fused_ordering(4) 00:13:34.382 fused_ordering(5) 00:13:34.382 fused_ordering(6) 00:13:34.382 fused_ordering(7) 00:13:34.382 fused_ordering(8) 00:13:34.382 fused_ordering(9) 00:13:34.382 fused_ordering(10) 00:13:34.382 fused_ordering(11) 00:13:34.382 fused_ordering(12) 00:13:34.382 fused_ordering(13) 00:13:34.382 fused_ordering(14) 00:13:34.382 fused_ordering(15) 00:13:34.382 fused_ordering(16) 00:13:34.382 fused_ordering(17) 00:13:34.382 fused_ordering(18) 00:13:34.382 fused_ordering(19) 00:13:34.382 fused_ordering(20) 00:13:34.382 fused_ordering(21) 00:13:34.382 fused_ordering(22) 00:13:34.382 fused_ordering(23) 00:13:34.382 fused_ordering(24) 00:13:34.382 fused_ordering(25) 00:13:34.382 fused_ordering(26) 00:13:34.382 fused_ordering(27) 00:13:34.382 fused_ordering(28) 00:13:34.382 fused_ordering(29) 00:13:34.382 fused_ordering(30) 00:13:34.382 fused_ordering(31) 00:13:34.382 fused_ordering(32) 00:13:34.382 fused_ordering(33) 00:13:34.382 fused_ordering(34) 00:13:34.382 fused_ordering(35) 00:13:34.382 fused_ordering(36) 00:13:34.382 fused_ordering(37) 00:13:34.382 fused_ordering(38) 00:13:34.382 fused_ordering(39) 00:13:34.382 fused_ordering(40) 00:13:34.382 fused_ordering(41) 00:13:34.382 fused_ordering(42) 00:13:34.382 fused_ordering(43) 00:13:34.382 fused_ordering(44) 00:13:34.382 fused_ordering(45) 00:13:34.382 fused_ordering(46) 00:13:34.382 fused_ordering(47) 00:13:34.382 fused_ordering(48) 00:13:34.382 fused_ordering(49) 00:13:34.382 fused_ordering(50) 00:13:34.382 fused_ordering(51) 00:13:34.382 fused_ordering(52) 00:13:34.382 fused_ordering(53) 00:13:34.382 fused_ordering(54) 00:13:34.382 fused_ordering(55) 00:13:34.382 fused_ordering(56) 00:13:34.382 fused_ordering(57) 00:13:34.382 fused_ordering(58) 00:13:34.382 fused_ordering(59) 00:13:34.382 fused_ordering(60) 00:13:34.382 fused_ordering(61) 00:13:34.382 fused_ordering(62) 00:13:34.382 fused_ordering(63) 00:13:34.382 fused_ordering(64) 00:13:34.382 fused_ordering(65) 00:13:34.382 fused_ordering(66) 00:13:34.382 fused_ordering(67) 00:13:34.382 fused_ordering(68) 00:13:34.382 fused_ordering(69) 00:13:34.382 fused_ordering(70) 00:13:34.382 fused_ordering(71) 00:13:34.382 fused_ordering(72) 00:13:34.382 fused_ordering(73) 00:13:34.382 fused_ordering(74) 00:13:34.382 fused_ordering(75) 00:13:34.382 fused_ordering(76) 00:13:34.382 fused_ordering(77) 00:13:34.382 fused_ordering(78) 00:13:34.382 fused_ordering(79) 00:13:34.382 fused_ordering(80) 00:13:34.382 fused_ordering(81) 00:13:34.382 fused_ordering(82) 00:13:34.382 fused_ordering(83) 00:13:34.382 fused_ordering(84) 00:13:34.382 fused_ordering(85) 00:13:34.382 fused_ordering(86) 00:13:34.382 fused_ordering(87) 00:13:34.382 fused_ordering(88) 00:13:34.382 fused_ordering(89) 00:13:34.382 fused_ordering(90) 00:13:34.382 fused_ordering(91) 00:13:34.382 fused_ordering(92) 00:13:34.382 fused_ordering(93) 00:13:34.382 fused_ordering(94) 00:13:34.382 fused_ordering(95) 00:13:34.382 fused_ordering(96) 00:13:34.382 fused_ordering(97) 00:13:34.382 fused_ordering(98) 
00:13:34.382 fused_ordering(99) 00:13:34.382 fused_ordering(100) 00:13:34.382 fused_ordering(101) 00:13:34.382 fused_ordering(102) 00:13:34.382 fused_ordering(103) 00:13:34.382 fused_ordering(104) 00:13:34.382 fused_ordering(105) 00:13:34.382 fused_ordering(106) 00:13:34.382 fused_ordering(107) 00:13:34.382 fused_ordering(108) 00:13:34.382 fused_ordering(109) 00:13:34.382 fused_ordering(110) 00:13:34.382 fused_ordering(111) 00:13:34.382 fused_ordering(112) 00:13:34.383 fused_ordering(113) 00:13:34.383 fused_ordering(114) 00:13:34.383 fused_ordering(115) 00:13:34.383 fused_ordering(116) 00:13:34.383 fused_ordering(117) 00:13:34.383 fused_ordering(118) 00:13:34.383 fused_ordering(119) 00:13:34.383 fused_ordering(120) 00:13:34.383 fused_ordering(121) 00:13:34.383 fused_ordering(122) 00:13:34.383 fused_ordering(123) 00:13:34.383 fused_ordering(124) 00:13:34.383 fused_ordering(125) 00:13:34.383 fused_ordering(126) 00:13:34.383 fused_ordering(127) 00:13:34.383 fused_ordering(128) 00:13:34.383 fused_ordering(129) 00:13:34.383 fused_ordering(130) 00:13:34.383 fused_ordering(131) 00:13:34.383 fused_ordering(132) 00:13:34.383 fused_ordering(133) 00:13:34.383 fused_ordering(134) 00:13:34.383 fused_ordering(135) 00:13:34.383 fused_ordering(136) 00:13:34.383 fused_ordering(137) 00:13:34.383 fused_ordering(138) 00:13:34.383 fused_ordering(139) 00:13:34.383 fused_ordering(140) 00:13:34.383 fused_ordering(141) 00:13:34.383 fused_ordering(142) 00:13:34.383 fused_ordering(143) 00:13:34.383 fused_ordering(144) 00:13:34.383 fused_ordering(145) 00:13:34.383 fused_ordering(146) 00:13:34.383 fused_ordering(147) 00:13:34.383 fused_ordering(148) 00:13:34.383 fused_ordering(149) 00:13:34.383 fused_ordering(150) 00:13:34.383 fused_ordering(151) 00:13:34.383 fused_ordering(152) 00:13:34.383 fused_ordering(153) 00:13:34.383 fused_ordering(154) 00:13:34.383 fused_ordering(155) 00:13:34.383 fused_ordering(156) 00:13:34.383 fused_ordering(157) 00:13:34.383 fused_ordering(158) 00:13:34.383 fused_ordering(159) 00:13:34.383 fused_ordering(160) 00:13:34.383 fused_ordering(161) 00:13:34.383 fused_ordering(162) 00:13:34.383 fused_ordering(163) 00:13:34.383 fused_ordering(164) 00:13:34.383 fused_ordering(165) 00:13:34.383 fused_ordering(166) 00:13:34.383 fused_ordering(167) 00:13:34.383 fused_ordering(168) 00:13:34.383 fused_ordering(169) 00:13:34.383 fused_ordering(170) 00:13:34.383 fused_ordering(171) 00:13:34.383 fused_ordering(172) 00:13:34.383 fused_ordering(173) 00:13:34.383 fused_ordering(174) 00:13:34.383 fused_ordering(175) 00:13:34.383 fused_ordering(176) 00:13:34.383 fused_ordering(177) 00:13:34.383 fused_ordering(178) 00:13:34.383 fused_ordering(179) 00:13:34.383 fused_ordering(180) 00:13:34.383 fused_ordering(181) 00:13:34.383 fused_ordering(182) 00:13:34.383 fused_ordering(183) 00:13:34.383 fused_ordering(184) 00:13:34.383 fused_ordering(185) 00:13:34.383 fused_ordering(186) 00:13:34.383 fused_ordering(187) 00:13:34.383 fused_ordering(188) 00:13:34.383 fused_ordering(189) 00:13:34.383 fused_ordering(190) 00:13:34.383 fused_ordering(191) 00:13:34.383 fused_ordering(192) 00:13:34.383 fused_ordering(193) 00:13:34.383 fused_ordering(194) 00:13:34.383 fused_ordering(195) 00:13:34.383 fused_ordering(196) 00:13:34.383 fused_ordering(197) 00:13:34.383 fused_ordering(198) 00:13:34.383 fused_ordering(199) 00:13:34.383 fused_ordering(200) 00:13:34.383 fused_ordering(201) 00:13:34.383 fused_ordering(202) 00:13:34.383 fused_ordering(203) 00:13:34.383 fused_ordering(204) 00:13:34.383 fused_ordering(205) 00:13:34.383 
fused_ordering(206) 00:13:34.383 fused_ordering(207) 00:13:34.383 ... fused_ordering(410) 00:13:34.643 ... fused_ordering(615) 00:13:35.310 ... fused_ordering(820) 00:13:35.569 ... fused_ordering(958) [per-entry fused_ordering progress output for entries 206 through 958 condensed; every entry in this range completed, with timestamps advancing from 00:13:34.383 to 00:13:35.570]
00:13:35.570 fused_ordering(959) 00:13:35.570 fused_ordering(960) 00:13:35.570 fused_ordering(961) 00:13:35.570 fused_ordering(962) 00:13:35.570 fused_ordering(963) 00:13:35.570 fused_ordering(964) 00:13:35.570 fused_ordering(965) 00:13:35.570 fused_ordering(966) 00:13:35.570 fused_ordering(967) 00:13:35.570 fused_ordering(968) 00:13:35.570 fused_ordering(969) 00:13:35.570 fused_ordering(970) 00:13:35.570 fused_ordering(971) 00:13:35.570 fused_ordering(972) 00:13:35.570 fused_ordering(973) 00:13:35.570 fused_ordering(974) 00:13:35.570 fused_ordering(975) 00:13:35.570 fused_ordering(976) 00:13:35.570 fused_ordering(977) 00:13:35.570 fused_ordering(978) 00:13:35.570 fused_ordering(979) 00:13:35.570 fused_ordering(980) 00:13:35.570 fused_ordering(981) 00:13:35.570 fused_ordering(982) 00:13:35.570 fused_ordering(983) 00:13:35.570 fused_ordering(984) 00:13:35.570 fused_ordering(985) 00:13:35.570 fused_ordering(986) 00:13:35.570 fused_ordering(987) 00:13:35.570 fused_ordering(988) 00:13:35.570 fused_ordering(989) 00:13:35.570 fused_ordering(990) 00:13:35.570 fused_ordering(991) 00:13:35.570 fused_ordering(992) 00:13:35.570 fused_ordering(993) 00:13:35.570 fused_ordering(994) 00:13:35.570 fused_ordering(995) 00:13:35.570 fused_ordering(996) 00:13:35.570 fused_ordering(997) 00:13:35.570 fused_ordering(998) 00:13:35.570 fused_ordering(999) 00:13:35.570 fused_ordering(1000) 00:13:35.570 fused_ordering(1001) 00:13:35.570 fused_ordering(1002) 00:13:35.570 fused_ordering(1003) 00:13:35.570 fused_ordering(1004) 00:13:35.570 fused_ordering(1005) 00:13:35.570 fused_ordering(1006) 00:13:35.570 fused_ordering(1007) 00:13:35.570 fused_ordering(1008) 00:13:35.570 fused_ordering(1009) 00:13:35.570 fused_ordering(1010) 00:13:35.570 fused_ordering(1011) 00:13:35.570 fused_ordering(1012) 00:13:35.570 fused_ordering(1013) 00:13:35.570 fused_ordering(1014) 00:13:35.570 fused_ordering(1015) 00:13:35.570 fused_ordering(1016) 00:13:35.570 fused_ordering(1017) 00:13:35.570 fused_ordering(1018) 00:13:35.570 fused_ordering(1019) 00:13:35.570 fused_ordering(1020) 00:13:35.570 fused_ordering(1021) 00:13:35.570 fused_ordering(1022) 00:13:35.570 fused_ordering(1023) 00:13:35.570 07:19:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:35.570 07:19:37 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:35.570 07:19:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:35.570 07:19:37 -- nvmf/common.sh@116 -- # sync 00:13:35.570 07:19:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:35.570 07:19:37 -- nvmf/common.sh@119 -- # set +e 00:13:35.570 07:19:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:35.570 07:19:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:35.570 rmmod nvme_tcp 00:13:35.570 rmmod nvme_fabrics 00:13:35.829 rmmod nvme_keyring 00:13:35.829 07:19:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:35.829 07:19:37 -- nvmf/common.sh@123 -- # set -e 00:13:35.829 07:19:37 -- nvmf/common.sh@124 -- # return 0 00:13:35.829 07:19:37 -- nvmf/common.sh@477 -- # '[' -n 82028 ']' 00:13:35.829 07:19:37 -- nvmf/common.sh@478 -- # killprocess 82028 00:13:35.829 07:19:37 -- common/autotest_common.sh@926 -- # '[' -z 82028 ']' 00:13:35.829 07:19:37 -- common/autotest_common.sh@930 -- # kill -0 82028 00:13:35.829 07:19:37 -- common/autotest_common.sh@931 -- # uname 00:13:35.829 07:19:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.829 07:19:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82028 00:13:35.829 07:19:37 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:35.829 07:19:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:35.829 killing process with pid 82028 00:13:35.829 07:19:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82028' 00:13:35.829 07:19:37 -- common/autotest_common.sh@945 -- # kill 82028 00:13:35.829 07:19:37 -- common/autotest_common.sh@950 -- # wait 82028 00:13:36.088 07:19:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:36.088 07:19:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.088 07:19:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.088 07:19:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.088 07:19:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.088 07:19:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.088 07:19:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.088 07:19:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.088 07:19:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:36.088 00:13:36.088 real 0m3.823s 00:13:36.088 user 0m4.248s 00:13:36.088 sys 0m1.493s 00:13:36.088 07:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.088 07:19:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.088 ************************************ 00:13:36.088 END TEST nvmf_fused_ordering 00:13:36.088 ************************************ 00:13:36.088 07:19:37 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.088 07:19:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:36.088 07:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.088 07:19:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.088 ************************************ 00:13:36.088 START TEST nvmf_delete_subsystem 00:13:36.088 ************************************ 00:13:36.088 07:19:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.088 * Looking for test storage... 
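The teardown traced just above (trap removal, nvmftestfini, nvmfcleanup, killprocess 82028) follows a fixed pattern. The sketch below is a simplified, hypothetical reconstruction of that pattern in bash; the real helpers live in nvmf/common.sh and common/autotest_common.sh and carry more error handling.

#!/usr/bin/env bash
# Simplified sketch of the nvmftestfini teardown traced above (hypothetical
# helper bodies, not the actual nvmf/common.sh / autotest_common.sh source).

nvmfcleanup() {
    sync
    set +e
    for _ in {1..20}; do
        # unload the kernel initiator modules; retry while references drain
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_1 for an SPDK target
    [[ $name == sudo ]] && return 1             # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

nvmfcleanup
killprocess 82028   # 82028 is the nvmf_tgt pid recorded for the fused_ordering run

In the trace this is what produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines and the "killing process with pid 82028" message before the suite moves on to nvmf_delete_subsystem.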
00:13:36.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.088 07:19:37 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.088 07:19:37 -- nvmf/common.sh@7 -- # uname -s 00:13:36.088 07:19:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.088 07:19:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.088 07:19:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.088 07:19:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.088 07:19:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.088 07:19:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.088 07:19:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.088 07:19:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.088 07:19:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.088 07:19:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.088 07:19:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:36.088 07:19:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:36.088 07:19:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.088 07:19:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.088 07:19:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:36.088 07:19:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.088 07:19:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.088 07:19:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.088 07:19:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.088 07:19:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.088 07:19:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.088 07:19:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.088 07:19:37 -- 
paths/export.sh@5 -- # export PATH 00:13:36.088 07:19:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.088 07:19:37 -- nvmf/common.sh@46 -- # : 0 00:13:36.088 07:19:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:36.088 07:19:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:36.088 07:19:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:36.088 07:19:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.088 07:19:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.088 07:19:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:36.088 07:19:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:36.088 07:19:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:36.088 07:19:37 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:36.088 07:19:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:36.088 07:19:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.088 07:19:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:36.088 07:19:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:36.088 07:19:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:36.088 07:19:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.088 07:19:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.088 07:19:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.347 07:19:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:36.347 07:19:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:36.347 07:19:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:36.347 07:19:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:36.347 07:19:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:36.347 07:19:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:36.347 07:19:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.347 07:19:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.347 07:19:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:36.347 07:19:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:36.347 07:19:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:36.347 07:19:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:36.347 07:19:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:36.347 07:19:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.347 07:19:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:36.347 07:19:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:36.347 07:19:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:36.347 07:19:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:36.347 07:19:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:36.347 07:19:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:36.347 Cannot find device "nvmf_tgt_br" 00:13:36.347 
07:19:37 -- nvmf/common.sh@154 -- # true 00:13:36.347 07:19:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.347 Cannot find device "nvmf_tgt_br2" 00:13:36.347 07:19:37 -- nvmf/common.sh@155 -- # true 00:13:36.347 07:19:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:36.347 07:19:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:36.347 Cannot find device "nvmf_tgt_br" 00:13:36.347 07:19:37 -- nvmf/common.sh@157 -- # true 00:13:36.347 07:19:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:36.347 Cannot find device "nvmf_tgt_br2" 00:13:36.347 07:19:37 -- nvmf/common.sh@158 -- # true 00:13:36.347 07:19:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:36.347 07:19:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:36.347 07:19:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.347 07:19:38 -- nvmf/common.sh@161 -- # true 00:13:36.347 07:19:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.347 07:19:38 -- nvmf/common.sh@162 -- # true 00:13:36.347 07:19:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:36.347 07:19:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:36.347 07:19:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:36.347 07:19:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.347 07:19:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:36.347 07:19:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.347 07:19:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.347 07:19:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:36.348 07:19:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:36.348 07:19:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:36.348 07:19:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:36.348 07:19:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:36.348 07:19:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:36.348 07:19:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.348 07:19:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.348 07:19:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.348 07:19:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:36.348 07:19:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:36.348 07:19:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.606 07:19:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.606 07:19:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.606 07:19:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.606 07:19:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.606 07:19:38 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:36.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:13:36.606 00:13:36.606 --- 10.0.0.2 ping statistics --- 00:13:36.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.606 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:36.606 07:19:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:36.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:36.606 00:13:36.606 --- 10.0.0.3 ping statistics --- 00:13:36.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.606 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:36.606 07:19:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:36.606 00:13:36.606 --- 10.0.0.1 ping statistics --- 00:13:36.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.606 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:36.606 07:19:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.606 07:19:38 -- nvmf/common.sh@421 -- # return 0 00:13:36.606 07:19:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.606 07:19:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.606 07:19:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.606 07:19:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.606 07:19:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.606 07:19:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.606 07:19:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.606 07:19:38 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:36.606 07:19:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.606 07:19:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:36.606 07:19:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.606 07:19:38 -- nvmf/common.sh@469 -- # nvmfpid=82290 00:13:36.606 07:19:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:36.606 07:19:38 -- nvmf/common.sh@470 -- # waitforlisten 82290 00:13:36.606 07:19:38 -- common/autotest_common.sh@819 -- # '[' -z 82290 ']' 00:13:36.606 07:19:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.606 07:19:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:36.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.606 07:19:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.606 07:19:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:36.606 07:19:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.606 [2024-11-04 07:19:38.330480] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
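For reference, the network plumbing that nvmf_veth_init traces out above reduces to the sketch below (same interface names and addresses as in the trace; the earlier "Cannot find device" / "No such file or directory" messages are just the cleanup pass over a topology that does not exist yet). Error handling and the teardown half are omitted.

#!/usr/bin/env bash
# Condensed sketch of the veth/namespace topology built by nvmf_veth_init,
# using the interface names and addresses shown in the trace.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target listener address
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability check

Once the pings succeed, nvmf_tgt is started inside nvmf_tgt_ns_spdk with -m 0x3 (two reactors), which is the SPDK/DPDK initialization that follows in the trace.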
00:13:36.606 [2024-11-04 07:19:38.330563] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.865 [2024-11-04 07:19:38.465902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.865 [2024-11-04 07:19:38.522738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.865 [2024-11-04 07:19:38.522895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.865 [2024-11-04 07:19:38.522909] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.865 [2024-11-04 07:19:38.522918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.865 [2024-11-04 07:19:38.523480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.865 [2024-11-04 07:19:38.523531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.812 07:19:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.812 07:19:39 -- common/autotest_common.sh@852 -- # return 0 00:13:37.812 07:19:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:37.812 07:19:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 07:19:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 [2024-11-04 07:19:39.393944] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 [2024-11-04 07:19:39.410237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 NULL1 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 
Delay0 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.812 07:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.812 07:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:37.812 07:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@28 -- # perf_pid=82341 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:37.812 07:19:39 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:37.812 [2024-11-04 07:19:39.594831] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:39.714 07:19:41 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.714 07:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.714 07:19:41 -- common/autotest_common.sh@10 -- # set +x 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error 
(sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 starting I/O failed: -6 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 [2024-11-04 07:19:41.637310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493e70 is same with the state(5) to be set 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Write completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.973 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 
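The burst of "completed with error (sct=0, sc=8)" records here is the point of the test: I/O is held in flight against the deliberately slow Delay0 namespace while the subsystem is deleted, so queued commands are aborted (status code type 0, status code 8, i.e. the NVMe generic "command aborted due to SQ deletion" status). A condensed sketch of the sequence the trace is executing, reusing the RPCs and perf arguments shown above (rpc_cmd is the suite's wrapper around scripts/rpc.py):

#!/usr/bin/env bash
# Condensed sketch of the delete-under-I/O sequence traced above
# (same RPCs and spdk_nvme_perf arguments as in the trace; error handling omitted).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s of injected latency per I/O
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# queue-depth-128 random 70/30 read/write load against the slow namespace
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                                          # let I/O pile up behind Delay0
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# outstanding commands now complete with (sct=0, sc=8) and perf exits with an error
wait "$perf_pid" || true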
00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 [2024-11-04 07:19:41.638099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493870 is same with the state(5) to be set 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 starting I/O failed: -6 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 [2024-11-04 07:19:41.639180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1fc000c00 is same with the state(5) to be set 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Write completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 00:13:39.974 Read completed with error (sct=0, sc=8) 
00:13:39.974 Read/Write completed with error (sct=0, sc=8) [this entry is repeated for every outstanding I/O on the queue pair]
00:13:40.910 [2024-11-04 07:19:42.609487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492070 is same with the state(5) to be set
00:13:40.910 Read/Write completed with error (sct=0, sc=8) [repeated]
00:13:40.910 [2024-11-04 07:19:42.632518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1fc00bf20 is same with the state(5) to be set
00:13:40.910 Read/Write completed with error (sct=0, sc=8) [repeated]
00:13:40.910 [2024-11-04 07:19:42.633123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1fc00c600 is same with the state(5) to be set
00:13:40.910 Read/Write completed with error (sct=0, sc=8) [repeated]
00:13:40.910 [2024-11-04 07:19:42.638820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493bc0 is same with the state(5) to be set
00:13:40.910 Read/Write completed with error (sct=0, sc=8) [repeated]
00:13:40.910 [2024-11-04 07:19:42.639729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494120 is same with the state(5) to be set
00:13:40.910 07:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:40.910 07:19:42 -- target/delete_subsystem.sh@34 -- # delay=0
00:13:40.910 07:19:42 -- target/delete_subsystem.sh@35 -- # kill -0 82341
00:13:40.910 07:19:42 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:13:40.911 [2024-11-04 07:19:42.642307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492070 (9): Bad file descriptor
00:13:40.911 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:40.911 Initializing NVMe Controllers
00:13:40.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:40.911 Controller IO queue size 128, less than required.
00:13:40.911 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:40.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:40.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:40.911 Initialization complete. Launching workers.
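Note on the aborted completions above: status (sct=0, sc=8) is the NVMe generic status "Command Aborted due to SQ Deletion" (printed later in this log as "ABORTED - SQ DELETION (00/08)"), i.e. every I/O spdk_nvme_perf still had in flight was failed back when nqn.2016-06.io.spdk:cnode1 was deleted underneath it. The script then simply waits for the perf process to exit. A minimal standalone sketch of that wait pattern (PID, bound and sleep interval are the ones seen in this run; the variable name is illustrative):

    delay=0
    # Poll until the perf process (PID 82341 in this run) has exited.
    # kill -0 only checks that the PID exists; it does not send a signal.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "spdk_nvme_perf did not exit in time" >&2; break; }
        sleep 0.5
    done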
00:13:40.911 ======================================================== 00:13:40.911 Latency(us) 00:13:40.911 Device Information : IOPS MiB/s Average min max 00:13:40.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.75 0.08 899361.97 811.94 1015838.99 00:13:40.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.95 0.07 1071119.99 945.78 2001725.09 00:13:40.911 ======================================================== 00:13:40.911 Total : 317.70 0.16 979890.26 811.94 2001725.09 00:13:40.911 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@35 -- # kill -0 82341 00:13:41.479 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82341) - No such process 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@45 -- # NOT wait 82341 00:13:41.479 07:19:43 -- common/autotest_common.sh@640 -- # local es=0 00:13:41.479 07:19:43 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 82341 00:13:41.479 07:19:43 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:41.479 07:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:41.479 07:19:43 -- common/autotest_common.sh@632 -- # type -t wait 00:13:41.479 07:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:41.479 07:19:43 -- common/autotest_common.sh@643 -- # wait 82341 00:13:41.479 07:19:43 -- common/autotest_common.sh@643 -- # es=1 00:13:41.479 07:19:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:41.479 07:19:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:41.479 07:19:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.479 07:19:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.479 07:19:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.479 07:19:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.479 07:19:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.479 07:19:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.479 [2024-11-04 07:19:43.164446] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.479 07:19:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.479 07:19:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.479 07:19:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.479 07:19:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@54 -- # perf_pid=82387 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:41.479 07:19:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.738 [2024-11-04 07:19:43.332814] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:41.997 07:19:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.997 07:19:43 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:41.997 07:19:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.565 07:19:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.565 07:19:44 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:42.565 07:19:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.132 07:19:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.132 07:19:44 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:43.132 07:19:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.391 07:19:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.391 07:19:45 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:43.391 07:19:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.958 07:19:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.958 07:19:45 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:43.958 07:19:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.525 07:19:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.525 07:19:46 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:44.525 07:19:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.783 Initializing NVMe Controllers 00:13:44.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.783 Controller IO queue size 128, less than required. 00:13:44.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:44.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:44.783 Initialization complete. Launching workers. 
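The Total row spdk_nvme_perf printed in the first latency table above (979890.26 us average) is the IOPS-weighted mean of the two per-core rows. A quick check with the displayed numbers (an illustrative awk one-liner, not part of the test):

    # Weighted mean of the per-core average latencies from the first run above.
    awk 'BEGIN {
        iops2 = 168.75; avg2 = 899361.97;   # core 2 row
        iops3 = 148.95; avg3 = 1071119.99;  # core 3 row
        printf "%.2f us\n", (iops2*avg2 + iops3*avg3) / (iops2 + iops3)
    }'
    # prints ~979888.75 us, matching the reported Total of 979890.26 to within display rounding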
00:13:44.783 ======================================================== 00:13:44.783 Latency(us) 00:13:44.783 Device Information : IOPS MiB/s Average min max 00:13:44.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003951.30 1000155.85 1014785.81 00:13:44.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006603.25 1000182.96 1040901.11 00:13:44.783 ======================================================== 00:13:44.783 Total : 256.00 0.12 1005277.28 1000155.85 1040901.11 00:13:44.783 00:13:45.043 07:19:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:45.043 07:19:46 -- target/delete_subsystem.sh@57 -- # kill -0 82387 00:13:45.043 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82387) - No such process 00:13:45.043 07:19:46 -- target/delete_subsystem.sh@67 -- # wait 82387 00:13:45.043 07:19:46 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:45.043 07:19:46 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:45.043 07:19:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.043 07:19:46 -- nvmf/common.sh@116 -- # sync 00:13:45.043 07:19:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.043 07:19:46 -- nvmf/common.sh@119 -- # set +e 00:13:45.043 07:19:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.043 07:19:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.043 rmmod nvme_tcp 00:13:45.043 rmmod nvme_fabrics 00:13:45.043 rmmod nvme_keyring 00:13:45.043 07:19:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.043 07:19:46 -- nvmf/common.sh@123 -- # set -e 00:13:45.043 07:19:46 -- nvmf/common.sh@124 -- # return 0 00:13:45.043 07:19:46 -- nvmf/common.sh@477 -- # '[' -n 82290 ']' 00:13:45.043 07:19:46 -- nvmf/common.sh@478 -- # killprocess 82290 00:13:45.043 07:19:46 -- common/autotest_common.sh@926 -- # '[' -z 82290 ']' 00:13:45.043 07:19:46 -- common/autotest_common.sh@930 -- # kill -0 82290 00:13:45.043 07:19:46 -- common/autotest_common.sh@931 -- # uname 00:13:45.043 07:19:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:45.043 07:19:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82290 00:13:45.043 killing process with pid 82290 00:13:45.043 07:19:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:45.043 07:19:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:45.043 07:19:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82290' 00:13:45.043 07:19:46 -- common/autotest_common.sh@945 -- # kill 82290 00:13:45.043 07:19:46 -- common/autotest_common.sh@950 -- # wait 82290 00:13:45.303 07:19:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.303 07:19:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.303 07:19:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.303 07:19:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.303 07:19:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.303 07:19:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.303 07:19:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.303 07:19:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.303 07:19:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.303 00:13:45.303 real 0m9.270s 00:13:45.303 user 0m29.249s 00:13:45.303 sys 0m1.119s 00:13:45.303 07:19:47 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.303 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 ************************************ 00:13:45.303 END TEST nvmf_delete_subsystem 00:13:45.303 ************************************ 00:13:45.303 07:19:47 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:45.303 07:19:47 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:45.303 07:19:47 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:45.303 07:19:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.303 07:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.303 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 ************************************ 00:13:45.303 START TEST nvmf_host_management 00:13:45.303 ************************************ 00:13:45.583 07:19:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:45.584 * Looking for test storage... 00:13:45.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.584 07:19:47 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.584 07:19:47 -- nvmf/common.sh@7 -- # uname -s 00:13:45.584 07:19:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.584 07:19:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.584 07:19:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.584 07:19:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.584 07:19:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.584 07:19:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.584 07:19:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.584 07:19:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.584 07:19:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.584 07:19:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.584 07:19:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:45.584 07:19:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:45.584 07:19:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.584 07:19:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.584 07:19:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.584 07:19:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.584 07:19:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.584 07:19:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.584 07:19:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.584 07:19:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.584 07:19:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.584 07:19:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.584 07:19:47 -- paths/export.sh@5 -- # export PATH 00:13:45.584 07:19:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.584 07:19:47 -- nvmf/common.sh@46 -- # : 0 00:13:45.584 07:19:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.584 07:19:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.584 07:19:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.584 07:19:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.584 07:19:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.584 07:19:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:45.584 07:19:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.584 07:19:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.584 07:19:47 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.584 07:19:47 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.584 07:19:47 -- target/host_management.sh@104 -- # nvmftestinit 00:13:45.584 07:19:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.584 07:19:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.584 07:19:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.584 07:19:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.584 07:19:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.584 07:19:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.584 07:19:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.584 07:19:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.584 07:19:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.584 07:19:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.584 07:19:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.584 07:19:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.584 07:19:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:13:45.584 07:19:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.584 07:19:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.584 07:19:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.584 07:19:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.584 07:19:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.584 07:19:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.584 07:19:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.584 07:19:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.584 07:19:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.584 07:19:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.584 07:19:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.584 07:19:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.584 07:19:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.584 07:19:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.584 07:19:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.584 Cannot find device "nvmf_tgt_br" 00:13:45.584 07:19:47 -- nvmf/common.sh@154 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.584 Cannot find device "nvmf_tgt_br2" 00:13:45.584 07:19:47 -- nvmf/common.sh@155 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.584 07:19:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.584 Cannot find device "nvmf_tgt_br" 00:13:45.584 07:19:47 -- nvmf/common.sh@157 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.584 Cannot find device "nvmf_tgt_br2" 00:13:45.584 07:19:47 -- nvmf/common.sh@158 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.584 07:19:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:45.584 07:19:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.584 07:19:47 -- nvmf/common.sh@161 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.584 07:19:47 -- nvmf/common.sh@162 -- # true 00:13:45.584 07:19:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.584 07:19:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.584 07:19:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.584 07:19:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.584 07:19:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.850 07:19:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.850 07:19:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.850 07:19:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.850 07:19:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.850 
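At this point nvmf_veth_init has built the virtual topology the rest of the run depends on; the link-up, bridge wiring, iptables ACCEPT rule and ping checks follow just below. A short recap with the addresses from this run:

    # host namespace (initiator side)          netns nvmf_tgt_ns_spdk (target side)
    #   nvmf_init_if  10.0.0.1/24                nvmf_tgt_if   10.0.0.2/24
    #   (veth peer: nvmf_init_br)                nvmf_tgt_if2  10.0.0.3/24
    #                                            (veth peers left in the host: nvmf_tgt_br, nvmf_tgt_br2)
    # nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 will all be enslaved to the nvmf_br
    # bridge below, so 10.0.0.1 on the host can reach 10.0.0.2/10.0.0.3 in the namespace.
    #
    # One way to inspect the result by hand (not part of the test script):
    ip -br addr show nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip -br addr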
07:19:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:45.850 07:19:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:45.850 07:19:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:45.850 07:19:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:45.850 07:19:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.850 07:19:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.850 07:19:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.850 07:19:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:45.850 07:19:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:45.850 07:19:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.850 07:19:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.850 07:19:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.850 07:19:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.850 07:19:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.850 07:19:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:45.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:13:45.850 00:13:45.850 --- 10.0.0.2 ping statistics --- 00:13:45.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.851 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:45.851 07:19:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:45.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:45.851 00:13:45.851 --- 10.0.0.3 ping statistics --- 00:13:45.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.851 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:45.851 07:19:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:13:45.851 00:13:45.851 --- 10.0.0.1 ping statistics --- 00:13:45.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.851 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:45.851 07:19:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.851 07:19:47 -- nvmf/common.sh@421 -- # return 0 00:13:45.851 07:19:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:45.851 07:19:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.851 07:19:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:45.851 07:19:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:45.851 07:19:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.851 07:19:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:45.851 07:19:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:45.851 07:19:47 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:45.851 07:19:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:45.851 07:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.851 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.851 ************************************ 00:13:45.851 START TEST nvmf_host_management 00:13:45.851 ************************************ 00:13:45.851 07:19:47 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:45.851 07:19:47 -- target/host_management.sh@69 -- # starttarget 00:13:45.851 07:19:47 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:45.851 07:19:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:45.851 07:19:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:45.851 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.851 07:19:47 -- nvmf/common.sh@469 -- # nvmfpid=82619 00:13:45.851 07:19:47 -- nvmf/common.sh@470 -- # waitforlisten 82619 00:13:45.851 07:19:47 -- common/autotest_common.sh@819 -- # '[' -z 82619 ']' 00:13:45.851 07:19:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.851 07:19:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:45.851 07:19:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.851 07:19:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.851 07:19:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.851 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.851 [2024-11-04 07:19:47.664513] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:45.851 [2024-11-04 07:19:47.664595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.109 [2024-11-04 07:19:47.804644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.109 [2024-11-04 07:19:47.892233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.109 [2024-11-04 07:19:47.892851] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:46.109 [2024-11-04 07:19:47.893184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.109 [2024-11-04 07:19:47.893441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.109 [2024-11-04 07:19:47.893850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.109 [2024-11-04 07:19:47.894168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.109 [2024-11-04 07:19:47.893991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.109 [2024-11-04 07:19:47.894189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.051 07:19:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:47.051 07:19:48 -- common/autotest_common.sh@852 -- # return 0 00:13:47.051 07:19:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:47.051 07:19:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 07:19:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.051 07:19:48 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.051 07:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 [2024-11-04 07:19:48.736100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.051 07:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.051 07:19:48 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:47.051 07:19:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 07:19:48 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:47.051 07:19:48 -- target/host_management.sh@23 -- # cat 00:13:47.051 07:19:48 -- target/host_management.sh@30 -- # rpc_cmd 00:13:47.051 07:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 Malloc0 00:13:47.051 [2024-11-04 07:19:48.822153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.051 07:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.051 07:19:48 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:47.051 07:19:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
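The TCP transport was created directly above with nvmf_create_transport -t tcp -o -u 8192; the batched rpcs.txt applied right after it is what produced the Malloc0 bdev and the 10.0.0.2:4420 listener seen in the notices, for the nqn.2016-06.io.spdk:cnode0 subsystem that bdevperf connects to below. A hedged reconstruction of the equivalent explicit calls (the generated rpcs.txt is not echoed in this log, so the exact flags are inferred from values that appear elsewhere in it: MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, hostnqn nqn.2016-06.io.spdk:host0):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420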
00:13:47.051 07:19:48 -- target/host_management.sh@73 -- # perfpid=82694 00:13:47.051 07:19:48 -- target/host_management.sh@74 -- # waitforlisten 82694 /var/tmp/bdevperf.sock 00:13:47.051 07:19:48 -- common/autotest_common.sh@819 -- # '[' -z 82694 ']' 00:13:47.051 07:19:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.051 07:19:48 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:47.051 07:19:48 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:47.051 07:19:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.051 07:19:48 -- nvmf/common.sh@520 -- # config=() 00:13:47.051 07:19:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.051 07:19:48 -- nvmf/common.sh@520 -- # local subsystem config 00:13:47.051 07:19:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:47.051 07:19:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.051 07:19:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:47.051 { 00:13:47.051 "params": { 00:13:47.051 "name": "Nvme$subsystem", 00:13:47.051 "trtype": "$TEST_TRANSPORT", 00:13:47.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.051 "adrfam": "ipv4", 00:13:47.051 "trsvcid": "$NVMF_PORT", 00:13:47.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.051 "hdgst": ${hdgst:-false}, 00:13:47.051 "ddgst": ${ddgst:-false} 00:13:47.051 }, 00:13:47.051 "method": "bdev_nvme_attach_controller" 00:13:47.051 } 00:13:47.051 EOF 00:13:47.051 )") 00:13:47.051 07:19:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 07:19:48 -- nvmf/common.sh@542 -- # cat 00:13:47.051 07:19:48 -- nvmf/common.sh@544 -- # jq . 00:13:47.051 07:19:48 -- nvmf/common.sh@545 -- # IFS=, 00:13:47.051 07:19:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:47.051 "params": { 00:13:47.051 "name": "Nvme0", 00:13:47.051 "trtype": "tcp", 00:13:47.051 "traddr": "10.0.0.2", 00:13:47.051 "adrfam": "ipv4", 00:13:47.051 "trsvcid": "4420", 00:13:47.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:47.051 "hdgst": false, 00:13:47.051 "ddgst": false 00:13:47.051 }, 00:13:47.051 "method": "bdev_nvme_attach_controller" 00:13:47.051 }' 00:13:47.310 [2024-11-04 07:19:48.928953] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:47.310 [2024-11-04 07:19:48.929052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82694 ] 00:13:47.310 [2024-11-04 07:19:49.071396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.310 [2024-11-04 07:19:49.130280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.569 Running I/O for 10 seconds... 
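bdevperf is now running the 10-second verify workload; the waitforio helper that follows polls the bdevperf RPC socket until the Nvme0n1 bdev has completed a minimum number of reads. A minimal standalone sketch of that polling pattern (the RPC call and jq filter are the ones used below; the loop bound and sleep interval are illustrative):

    for _ in $(seq 1 10); do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        # 100 is the threshold the test checks against (read_io_count -ge 100)
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done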
00:13:48.137 07:19:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.137 07:19:49 -- common/autotest_common.sh@852 -- # return 0 00:13:48.137 07:19:49 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:48.137 07:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.137 07:19:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.137 07:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.137 07:19:49 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.137 07:19:49 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:48.137 07:19:49 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:48.137 07:19:49 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:48.137 07:19:49 -- target/host_management.sh@52 -- # local ret=1 00:13:48.137 07:19:49 -- target/host_management.sh@53 -- # local i 00:13:48.137 07:19:49 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:48.137 07:19:49 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:48.137 07:19:49 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:48.137 07:19:49 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:48.137 07:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.137 07:19:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.398 07:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.398 07:19:50 -- target/host_management.sh@55 -- # read_io_count=2498 00:13:48.398 07:19:50 -- target/host_management.sh@58 -- # '[' 2498 -ge 100 ']' 00:13:48.398 07:19:50 -- target/host_management.sh@59 -- # ret=0 00:13:48.398 07:19:50 -- target/host_management.sh@60 -- # break 00:13:48.398 07:19:50 -- target/host_management.sh@64 -- # return 0 00:13:48.398 07:19:50 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:48.398 07:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.398 07:19:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.398 [2024-11-04 07:19:50.028208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the 
state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cae70 is same with the state(5) to be set 00:13:48.398 [2024-11-04 07:19:50.028889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.028927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.028964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.028975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.028986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.028995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.398 [2024-11-04 07:19:50.029262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.398 [2024-11-04 07:19:50.029273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.029977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.029985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.399 [2024-11-04 07:19:50.030159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.399 [2024-11-04 07:19:50.030168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.400 [2024-11-04 07:19:50.030341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030447] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb0dc0 was disconnected and freed. reset controller. 
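Every completion printed above carries status "(00/08)": Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion. When the reset path tears down I/O submission queue 1, each still-queued READ/WRITE is completed with that status before qpair 0xeb0dc0 is disconnected and freed. A minimal shell sketch for summarizing such a burst from a captured log (the file name build.log is a placeholder):

# Count how many queued commands were aborted, then break them down by opcode (READ vs WRITE).
grep -c 'ABORTED - SQ DELETION' build.log
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | awk '{print $NF}' | sort | uniq -c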
00:13:48.400 [2024-11-04 07:19:50.030546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.400 [2024-11-04 07:19:50.030571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.400 [2024-11-04 07:19:50.030592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.400 [2024-11-04 07:19:50.030612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.400 [2024-11-04 07:19:50.030632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.400 [2024-11-04 07:19:50.030642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0ca70 is same with the state(5) to be set 00:13:48.400 [2024-11-04 07:19:50.031694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:48.400 07:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.400 07:19:50 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:48.400 07:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.400 07:19:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.400 task offset: 82944 on job bdev=Nvme0n1 fails 00:13:48.400 00:13:48.400 Latency(us) 00:13:48.400 [2024-11-04T07:19:50.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.400 [2024-11-04T07:19:50.241Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:48.400 [2024-11-04T07:19:50.241Z] Job: Nvme0n1 ended in about 0.73 seconds with error 00:13:48.400 Verification LBA range: start 0x0 length 0x400 00:13:48.400 Nvme0n1 : 0.73 3683.21 230.20 88.09 0.00 16720.53 3366.17 22878.02 00:13:48.400 [2024-11-04T07:19:50.241Z] =================================================================================================================== 00:13:48.400 [2024-11-04T07:19:50.241Z] Total : 3683.21 230.20 88.09 0.00 16720.53 3366.17 22878.02 00:13:48.400 [2024-11-04 07:19:50.033570] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:48.400 [2024-11-04 07:19:50.033594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0ca70 (9): Bad file descriptor 00:13:48.400 07:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.400 07:19:50 -- target/host_management.sh@87 -- # sleep 1 00:13:48.400 [2024-11-04 07:19:50.045508] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
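The rpc_cmd nvmf_subsystem_add_host call interleaved with the reset is the core of this host_management case: the host NQN is added to the subsystem's allowed-hosts list while bdevperf is still reconnecting, and the short Latency table above records the I/O that failed during that window. Outside the harness the same step is just RPCs against the target socket; a sketch, with the NQNs taken from the log and the jq verification being an assumption about the nvmf_get_subsystems output shape:

# Allow-list host0 on cnode0, then read back the subsystem's host list.
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .hosts'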
00:13:49.335 07:19:51 -- target/host_management.sh@91 -- # kill -9 82694 00:13:49.335 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82694) - No such process 00:13:49.335 07:19:51 -- target/host_management.sh@91 -- # true 00:13:49.335 07:19:51 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:49.336 07:19:51 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:49.336 07:19:51 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:49.336 07:19:51 -- nvmf/common.sh@520 -- # config=() 00:13:49.336 07:19:51 -- nvmf/common.sh@520 -- # local subsystem config 00:13:49.336 07:19:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:49.336 07:19:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:49.336 { 00:13:49.336 "params": { 00:13:49.336 "name": "Nvme$subsystem", 00:13:49.336 "trtype": "$TEST_TRANSPORT", 00:13:49.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.336 "adrfam": "ipv4", 00:13:49.336 "trsvcid": "$NVMF_PORT", 00:13:49.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.336 "hdgst": ${hdgst:-false}, 00:13:49.336 "ddgst": ${ddgst:-false} 00:13:49.336 }, 00:13:49.336 "method": "bdev_nvme_attach_controller" 00:13:49.336 } 00:13:49.336 EOF 00:13:49.336 )") 00:13:49.336 07:19:51 -- nvmf/common.sh@542 -- # cat 00:13:49.336 07:19:51 -- nvmf/common.sh@544 -- # jq . 00:13:49.336 07:19:51 -- nvmf/common.sh@545 -- # IFS=, 00:13:49.336 07:19:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:49.336 "params": { 00:13:49.336 "name": "Nvme0", 00:13:49.336 "trtype": "tcp", 00:13:49.336 "traddr": "10.0.0.2", 00:13:49.336 "adrfam": "ipv4", 00:13:49.336 "trsvcid": "4420", 00:13:49.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:49.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:49.336 "hdgst": false, 00:13:49.336 "ddgst": false 00:13:49.336 }, 00:13:49.336 "method": "bdev_nvme_attach_controller" 00:13:49.336 }' 00:13:49.336 [2024-11-04 07:19:51.103991] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:49.336 [2024-11-04 07:19:51.104082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82747 ] 00:13:49.595 [2024-11-04 07:19:51.246756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.595 [2024-11-04 07:19:51.299969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.854 Running I/O for 1 seconds... 
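The config fed to bdevperf through --json /dev/fd/62 is produced by gen_nvmf_target_json from the heredoc printed above: the bdev_nvme_attach_controller entry ends up wrapped in a "bdev" subsystem config. A hand-written equivalent, written to a regular file (the wrapper layout shown is the usual SPDK JSON-config shape; the generated config may include extra entries, so treat this as a sketch rather than the exact output):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1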
00:13:50.791 00:13:50.791 Latency(us) 00:13:50.791 [2024-11-04T07:19:52.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.791 [2024-11-04T07:19:52.632Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:50.791 Verification LBA range: start 0x0 length 0x400 00:13:50.791 Nvme0n1 : 1.01 3915.00 244.69 0.00 0.00 16080.36 1169.22 23235.49 00:13:50.791 [2024-11-04T07:19:52.632Z] =================================================================================================================== 00:13:50.791 [2024-11-04T07:19:52.632Z] Total : 3915.00 244.69 0.00 0.00 16080.36 1169.22 23235.49 00:13:51.050 07:19:52 -- target/host_management.sh@101 -- # stoptarget 00:13:51.050 07:19:52 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:51.050 07:19:52 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:51.050 07:19:52 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:51.050 07:19:52 -- target/host_management.sh@40 -- # nvmftestfini 00:13:51.050 07:19:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:51.050 07:19:52 -- nvmf/common.sh@116 -- # sync 00:13:51.050 07:19:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:51.050 07:19:52 -- nvmf/common.sh@119 -- # set +e 00:13:51.050 07:19:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:51.050 07:19:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:51.050 rmmod nvme_tcp 00:13:51.050 rmmod nvme_fabrics 00:13:51.050 rmmod nvme_keyring 00:13:51.050 07:19:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:51.050 07:19:52 -- nvmf/common.sh@123 -- # set -e 00:13:51.050 07:19:52 -- nvmf/common.sh@124 -- # return 0 00:13:51.050 07:19:52 -- nvmf/common.sh@477 -- # '[' -n 82619 ']' 00:13:51.050 07:19:52 -- nvmf/common.sh@478 -- # killprocess 82619 00:13:51.050 07:19:52 -- common/autotest_common.sh@926 -- # '[' -z 82619 ']' 00:13:51.050 07:19:52 -- common/autotest_common.sh@930 -- # kill -0 82619 00:13:51.050 07:19:52 -- common/autotest_common.sh@931 -- # uname 00:13:51.050 07:19:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.050 07:19:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82619 00:13:51.050 killing process with pid 82619 00:13:51.050 07:19:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:51.050 07:19:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:51.050 07:19:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82619' 00:13:51.050 07:19:52 -- common/autotest_common.sh@945 -- # kill 82619 00:13:51.050 07:19:52 -- common/autotest_common.sh@950 -- # wait 82619 00:13:51.309 [2024-11-04 07:19:53.094122] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:51.309 07:19:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.309 07:19:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.309 07:19:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.309 07:19:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.309 07:19:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.309 07:19:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.309 07:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.309 07:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.569 07:19:53 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:51.569 00:13:51.569 real 0m5.556s 00:13:51.569 user 0m23.076s 00:13:51.569 sys 0m1.404s 00:13:51.569 07:19:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.569 ************************************ 00:13:51.569 END TEST nvmf_host_management 00:13:51.569 ************************************ 00:13:51.569 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.569 07:19:53 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:51.569 00:13:51.569 real 0m6.058s 00:13:51.569 user 0m23.193s 00:13:51.569 sys 0m1.657s 00:13:51.569 07:19:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.569 ************************************ 00:13:51.569 END TEST nvmf_host_management 00:13:51.569 ************************************ 00:13:51.569 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.569 07:19:53 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:51.569 07:19:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:51.569 07:19:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.569 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.569 ************************************ 00:13:51.569 START TEST nvmf_lvol 00:13:51.569 ************************************ 00:13:51.569 07:19:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:51.569 * Looking for test storage... 00:13:51.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.569 07:19:53 -- nvmf/common.sh@7 -- # uname -s 00:13:51.569 07:19:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.569 07:19:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.569 07:19:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.569 07:19:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.569 07:19:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.569 07:19:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.569 07:19:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.569 07:19:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.569 07:19:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.569 07:19:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.569 07:19:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:51.569 07:19:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:13:51.569 07:19:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.569 07:19:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.569 07:19:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:51.569 07:19:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.569 07:19:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.569 07:19:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.569 07:19:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.569 07:19:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.569 07:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.569 07:19:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.569 07:19:53 -- paths/export.sh@5 -- # export PATH 00:13:51.569 07:19:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.569 07:19:53 -- nvmf/common.sh@46 -- # : 0 00:13:51.569 07:19:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:51.569 07:19:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:51.569 07:19:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:51.569 07:19:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.569 07:19:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.569 07:19:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:51.569 07:19:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:51.569 07:19:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.569 07:19:53 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:51.569 07:19:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:51.569 07:19:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
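The constants set at the top of nvmf_lvol.sh fix the geometry used for the rest of this run: two 64 MiB malloc bdevs with 512-byte blocks, an initial 20 MiB lvol and a 30 MiB resize target. A quick sanity check of the expected sizes (sketch only; the raid0 and lvstore themselves are created further down):

# Expected capacities, in MiB, for the lvol test below.
MALLOC_BDEV_SIZE=64; LVOL_BDEV_INIT_SIZE=20; LVOL_BDEV_FINAL_SIZE=30
echo "raid0 over two malloc bdevs: $((2 * MALLOC_BDEV_SIZE)) MiB"
echo "lvol: ${LVOL_BDEV_INIT_SIZE} MiB initially, resized to ${LVOL_BDEV_FINAL_SIZE} MiB"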
00:13:51.569 07:19:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:51.570 07:19:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:51.570 07:19:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:51.570 07:19:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.570 07:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.570 07:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.570 07:19:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:51.570 07:19:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:51.570 07:19:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:51.570 07:19:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:51.570 07:19:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:51.570 07:19:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:51.570 07:19:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.570 07:19:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.570 07:19:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:51.570 07:19:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:51.570 07:19:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.570 07:19:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.570 07:19:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.570 07:19:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.570 07:19:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.570 07:19:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.570 07:19:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.570 07:19:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.570 07:19:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:51.570 07:19:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:51.570 Cannot find device "nvmf_tgt_br" 00:13:51.570 07:19:53 -- nvmf/common.sh@154 -- # true 00:13:51.570 07:19:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.570 Cannot find device "nvmf_tgt_br2" 00:13:51.570 07:19:53 -- nvmf/common.sh@155 -- # true 00:13:51.570 07:19:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:51.570 07:19:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:51.829 Cannot find device "nvmf_tgt_br" 00:13:51.829 07:19:53 -- nvmf/common.sh@157 -- # true 00:13:51.829 07:19:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:51.829 Cannot find device "nvmf_tgt_br2" 00:13:51.829 07:19:53 -- nvmf/common.sh@158 -- # true 00:13:51.829 07:19:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:51.829 07:19:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:51.829 07:19:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.829 07:19:53 -- nvmf/common.sh@161 -- # true 00:13:51.829 07:19:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.829 07:19:53 -- nvmf/common.sh@162 -- # true 00:13:51.829 07:19:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.829 07:19:53 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:13:51.829 07:19:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.829 07:19:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.829 07:19:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.829 07:19:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.829 07:19:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.829 07:19:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.829 07:19:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.829 07:19:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:51.829 07:19:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:51.829 07:19:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:51.829 07:19:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:51.829 07:19:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.829 07:19:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.829 07:19:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.829 07:19:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.829 07:19:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.829 07:19:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.829 07:19:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.829 07:19:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.829 07:19:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.829 07:19:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.829 07:19:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:51.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:51.829 00:13:51.829 --- 10.0.0.2 ping statistics --- 00:13:51.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.830 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:51.830 07:19:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:51.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:51.830 00:13:51.830 --- 10.0.0.3 ping statistics --- 00:13:51.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.830 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:51.830 07:19:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:13:52.089 00:13:52.089 --- 10.0.0.1 ping statistics --- 00:13:52.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.089 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:52.089 07:19:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.089 07:19:53 -- nvmf/common.sh@421 -- # return 0 00:13:52.089 07:19:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:52.089 07:19:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.089 07:19:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:52.089 07:19:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:52.089 07:19:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.089 07:19:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:52.089 07:19:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:52.089 07:19:53 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:52.089 07:19:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:52.089 07:19:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:52.089 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.089 07:19:53 -- nvmf/common.sh@469 -- # nvmfpid=82969 00:13:52.089 07:19:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:52.089 07:19:53 -- nvmf/common.sh@470 -- # waitforlisten 82969 00:13:52.089 07:19:53 -- common/autotest_common.sh@819 -- # '[' -z 82969 ']' 00:13:52.089 07:19:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.089 07:19:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.089 07:19:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.089 07:19:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.089 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.089 [2024-11-04 07:19:53.753719] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:52.089 [2024-11-04 07:19:53.753804] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.089 [2024-11-04 07:19:53.896383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.348 [2024-11-04 07:19:53.970039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.348 [2024-11-04 07:19:53.970221] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.348 [2024-11-04 07:19:53.970238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.348 [2024-11-04 07:19:53.970251] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
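The nvmf_veth_init block above builds the self-contained topology every TCP test in this run depends on: a nvmf_tgt_ns_spdk namespace holding the target-side interfaces (10.0.0.2 and 10.0.0.3), the initiator-side nvmf_init_if (10.0.0.1) left in the root namespace, all peer ends bridged on nvmf_br, an iptables rule accepting port 4420, and a ping of each address as a smoke test. A condensed sketch of the same setup, with the second target interface and the FORWARD rule omitted for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target namespace reachability check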
00:13:52.348 [2024-11-04 07:19:53.970424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.348 [2024-11-04 07:19:53.970531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.348 [2024-11-04 07:19:53.970542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.284 07:19:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.284 07:19:54 -- common/autotest_common.sh@852 -- # return 0 00:13:53.284 07:19:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:53.285 07:19:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:53.285 07:19:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.285 07:19:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.285 07:19:54 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.285 [2024-11-04 07:19:55.084928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.285 07:19:55 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.852 07:19:55 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:53.852 07:19:55 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.852 07:19:55 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:53.852 07:19:55 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:54.111 07:19:55 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:54.369 07:19:56 -- target/nvmf_lvol.sh@29 -- # lvs=db7756f7-66e8-406f-b62c-ec254ebc08f6 00:13:54.369 07:19:56 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u db7756f7-66e8-406f-b62c-ec254ebc08f6 lvol 20 00:13:54.627 07:19:56 -- target/nvmf_lvol.sh@32 -- # lvol=3be5ccb0-9511-46c5-9661-b545ebae8e9d 00:13:54.627 07:19:56 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.885 07:19:56 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3be5ccb0-9511-46c5-9661-b545ebae8e9d 00:13:55.143 07:19:56 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:55.402 [2024-11-04 07:19:57.099634] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.402 07:19:57 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.661 07:19:57 -- target/nvmf_lvol.sh@42 -- # perf_pid=83118 00:13:55.661 07:19:57 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:55.661 07:19:57 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:56.597 07:19:58 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3be5ccb0-9511-46c5-9661-b545ebae8e9d MY_SNAPSHOT 00:13:56.856 07:19:58 -- target/nvmf_lvol.sh@47 -- # snapshot=12cb4ea5-5726-4da3-bd98-fd4f9710a46d 00:13:56.856 07:19:58 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3be5ccb0-9511-46c5-9661-b545ebae8e9d 30 00:13:57.424 07:19:58 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 12cb4ea5-5726-4da3-bd98-fd4f9710a46d MY_CLONE 00:13:57.684 07:19:59 -- target/nvmf_lvol.sh@49 -- # clone=95d3670b-8e83-4d41-b0d7-90bed180e1b4 00:13:57.684 07:19:59 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 95d3670b-8e83-4d41-b0d7-90bed180e1b4 00:13:58.621 07:20:00 -- target/nvmf_lvol.sh@53 -- # wait 83118 00:14:06.779 Initializing NVMe Controllers 00:14:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:06.779 Controller IO queue size 128, less than required. 00:14:06.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:06.779 Initialization complete. Launching workers. 00:14:06.779 ======================================================== 00:14:06.779 Latency(us) 00:14:06.779 Device Information : IOPS MiB/s Average min max 00:14:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8356.90 32.64 15329.66 2022.28 102198.71 00:14:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7030.90 27.46 18207.65 3192.26 80385.76 00:14:06.779 ======================================================== 00:14:06.779 Total : 15387.80 60.11 16644.65 2022.28 102198.71 00:14:06.779 00:14:06.779 07:20:07 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:06.779 07:20:07 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3be5ccb0-9511-46c5-9661-b545ebae8e9d 00:14:06.779 07:20:08 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db7756f7-66e8-406f-b62c-ec254ebc08f6 00:14:06.779 07:20:08 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:06.779 07:20:08 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:06.779 07:20:08 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:06.779 07:20:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:06.779 07:20:08 -- nvmf/common.sh@116 -- # sync 00:14:06.779 07:20:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:06.779 07:20:08 -- nvmf/common.sh@119 -- # set +e 00:14:06.779 07:20:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:06.779 07:20:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:06.779 rmmod nvme_tcp 00:14:06.779 rmmod nvme_fabrics 00:14:06.779 rmmod nvme_keyring 00:14:06.779 07:20:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:06.779 07:20:08 -- nvmf/common.sh@123 -- # set -e 00:14:06.779 07:20:08 -- nvmf/common.sh@124 -- # return 0 00:14:06.779 07:20:08 -- nvmf/common.sh@477 -- # '[' -n 82969 ']' 00:14:06.779 07:20:08 -- nvmf/common.sh@478 -- # killprocess 82969 00:14:06.779 07:20:08 -- common/autotest_common.sh@926 -- # '[' -z 82969 ']' 00:14:06.779 07:20:08 -- common/autotest_common.sh@930 -- # kill -0 82969 00:14:06.779 07:20:08 -- common/autotest_common.sh@931 -- # uname 00:14:06.779 07:20:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:06.779 07:20:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 82969 00:14:06.779 07:20:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:06.779 07:20:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:06.779 killing process with pid 82969 00:14:06.779 07:20:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82969' 00:14:06.779 07:20:08 -- common/autotest_common.sh@945 -- # kill 82969 00:14:06.779 07:20:08 -- common/autotest_common.sh@950 -- # wait 82969 00:14:07.038 07:20:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:07.038 07:20:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:07.038 07:20:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:07.038 07:20:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.038 07:20:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:07.038 07:20:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.038 07:20:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.038 07:20:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.038 07:20:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:07.038 ************************************ 00:14:07.038 END TEST nvmf_lvol 00:14:07.038 ************************************ 00:14:07.038 00:14:07.038 real 0m15.513s 00:14:07.038 user 1m5.269s 00:14:07.038 sys 0m3.725s 00:14:07.038 07:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.039 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.039 07:20:08 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.039 07:20:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:07.039 07:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.039 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.039 ************************************ 00:14:07.039 START TEST nvmf_lvs_grow 00:14:07.039 ************************************ 00:14:07.039 07:20:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.298 * Looking for test storage... 
00:14:07.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.298 07:20:08 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.298 07:20:08 -- nvmf/common.sh@7 -- # uname -s 00:14:07.298 07:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.298 07:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.298 07:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.298 07:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.298 07:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.298 07:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.298 07:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.298 07:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.298 07:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.298 07:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.298 07:20:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:07.298 07:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:07.298 07:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.298 07:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.298 07:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.298 07:20:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.298 07:20:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.298 07:20:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.298 07:20:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.298 07:20:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.298 07:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.298 07:20:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.298 07:20:08 -- 
paths/export.sh@5 -- # export PATH 00:14:07.299 07:20:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.299 07:20:08 -- nvmf/common.sh@46 -- # : 0 00:14:07.299 07:20:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:07.299 07:20:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:07.299 07:20:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:07.299 07:20:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.299 07:20:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.299 07:20:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:07.299 07:20:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:07.299 07:20:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:07.299 07:20:08 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.299 07:20:08 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.299 07:20:08 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:07.299 07:20:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:07.299 07:20:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.299 07:20:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:07.299 07:20:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:07.299 07:20:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:07.299 07:20:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.299 07:20:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.299 07:20:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.299 07:20:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:07.299 07:20:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:07.299 07:20:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:07.299 07:20:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:07.299 07:20:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:07.299 07:20:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:07.299 07:20:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.299 07:20:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.299 07:20:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.299 07:20:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:07.299 07:20:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.299 07:20:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.299 07:20:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.299 07:20:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.299 07:20:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.299 07:20:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.299 07:20:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.299 07:20:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.299 07:20:08 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:07.299 07:20:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:07.299 Cannot find device "nvmf_tgt_br" 00:14:07.299 07:20:08 -- nvmf/common.sh@154 -- # true 00:14:07.299 07:20:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.299 Cannot find device "nvmf_tgt_br2" 00:14:07.299 07:20:08 -- nvmf/common.sh@155 -- # true 00:14:07.299 07:20:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:07.299 07:20:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:07.299 Cannot find device "nvmf_tgt_br" 00:14:07.299 07:20:08 -- nvmf/common.sh@157 -- # true 00:14:07.299 07:20:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:07.299 Cannot find device "nvmf_tgt_br2" 00:14:07.299 07:20:09 -- nvmf/common.sh@158 -- # true 00:14:07.299 07:20:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:07.299 07:20:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:07.299 07:20:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.299 07:20:09 -- nvmf/common.sh@161 -- # true 00:14:07.299 07:20:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.299 07:20:09 -- nvmf/common.sh@162 -- # true 00:14:07.299 07:20:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.299 07:20:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.299 07:20:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.299 07:20:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.299 07:20:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.299 07:20:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.299 07:20:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.299 07:20:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.299 07:20:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.558 07:20:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:07.558 07:20:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:07.558 07:20:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:07.558 07:20:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:07.558 07:20:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.558 07:20:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.558 07:20:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.558 07:20:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:07.558 07:20:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:07.558 07:20:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.558 07:20:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.558 07:20:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.558 07:20:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.558 07:20:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.558 07:20:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:07.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:07.558 00:14:07.558 --- 10.0.0.2 ping statistics --- 00:14:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.558 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:07.558 07:20:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:07.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:07.558 00:14:07.558 --- 10.0.0.3 ping statistics --- 00:14:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.558 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:07.558 07:20:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:07.558 00:14:07.558 --- 10.0.0.1 ping statistics --- 00:14:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.558 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:07.558 07:20:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.558 07:20:09 -- nvmf/common.sh@421 -- # return 0 00:14:07.558 07:20:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:07.558 07:20:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.558 07:20:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:07.558 07:20:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:07.558 07:20:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.558 07:20:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:07.558 07:20:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:07.558 07:20:09 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:07.558 07:20:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:07.558 07:20:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:07.558 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:14:07.558 07:20:09 -- nvmf/common.sh@469 -- # nvmfpid=83473 00:14:07.558 07:20:09 -- nvmf/common.sh@470 -- # waitforlisten 83473 00:14:07.558 07:20:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:07.558 07:20:09 -- common/autotest_common.sh@819 -- # '[' -z 83473 ']' 00:14:07.558 07:20:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.558 07:20:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:07.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.558 07:20:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.558 07:20:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:07.558 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:14:07.558 [2024-11-04 07:20:09.302937] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
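waitforlisten above blocks until the freshly started nvmf_tgt (pid 83473) is answering on /var/tmp/spdk.sock before any RPCs are issued. One way to wait for that readiness by hand is to poll the RPC socket until a trivial method succeeds (a sketch; not necessarily how waitforlisten is implemented internally):

sock=/var/tmp/spdk.sock
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 2 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5   # target still starting up
done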
00:14:07.558 [2024-11-04 07:20:09.303427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.818 [2024-11-04 07:20:09.435741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.818 [2024-11-04 07:20:09.494120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:07.818 [2024-11-04 07:20:09.494251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.818 [2024-11-04 07:20:09.494271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.818 [2024-11-04 07:20:09.494279] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.818 [2024-11-04 07:20:09.494307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.755 07:20:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:08.755 07:20:10 -- common/autotest_common.sh@852 -- # return 0 00:14:08.755 07:20:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:08.755 07:20:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:08.755 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 07:20:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.755 07:20:10 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.013 [2024-11-04 07:20:10.670202] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:09.013 07:20:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:09.013 07:20:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.013 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:14:09.013 ************************************ 00:14:09.013 START TEST lvs_grow_clean 00:14:09.013 ************************************ 00:14:09.013 07:20:10 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.013 07:20:10 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:09.272 07:20:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:09.273 07:20:11 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:09.531 07:20:11 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:09.531 07:20:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:09.531 07:20:11 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:09.790 07:20:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:09.790 07:20:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:09.790 07:20:11 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 lvol 150 00:14:10.049 07:20:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 00:14:10.049 07:20:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:10.049 07:20:11 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:10.308 [2024-11-04 07:20:12.068881] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:10.308 [2024-11-04 07:20:12.068966] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:10.308 true 00:14:10.308 07:20:12 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:10.308 07:20:12 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:10.566 07:20:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:10.566 07:20:12 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.825 07:20:12 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 00:14:11.084 07:20:12 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.343 [2024-11-04 07:20:13.053454] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.343 07:20:13 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.602 07:20:13 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83641 00:14:11.602 07:20:13 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:11.602 07:20:13 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.602 07:20:13 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83641 /var/tmp/bdevperf.sock 00:14:11.602 07:20:13 -- common/autotest_common.sh@819 -- # '[' -z 83641 ']' 00:14:11.602 07:20:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.602 07:20:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.602 07:20:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
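Note: stripped of the xtrace noise, the setup traced above reduces to a short RPC sequence against the target. This is a sketch of the clean-grow setup only, not the full nvmf_lvs_grow.sh logic; rpc.py below stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path seen in the trace, and the uuids are the ones this particular run reported:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev        # 200 MiB backing file
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 lvol 150       # 150 MiB lvol on that lvstore
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f15313a9-0c2e-4b2d-a72e-f98dcc0f8356
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

After this the lvol is reachable as a namespace of cnode0 over NVMe/TCP, which is what the bdevperf process started next attaches to.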
00:14:11.602 07:20:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.602 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:14:11.602 [2024-11-04 07:20:13.318822] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:11.602 [2024-11-04 07:20:13.318924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83641 ] 00:14:11.861 [2024-11-04 07:20:13.455983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.861 [2024-11-04 07:20:13.527441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.796 07:20:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.796 07:20:14 -- common/autotest_common.sh@852 -- # return 0 00:14:12.796 07:20:14 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:12.796 Nvme0n1 00:14:12.796 07:20:14 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:13.055 [ 00:14:13.055 { 00:14:13.055 "aliases": [ 00:14:13.055 "f15313a9-0c2e-4b2d-a72e-f98dcc0f8356" 00:14:13.055 ], 00:14:13.055 "assigned_rate_limits": { 00:14:13.055 "r_mbytes_per_sec": 0, 00:14:13.055 "rw_ios_per_sec": 0, 00:14:13.055 "rw_mbytes_per_sec": 0, 00:14:13.055 "w_mbytes_per_sec": 0 00:14:13.055 }, 00:14:13.055 "block_size": 4096, 00:14:13.055 "claimed": false, 00:14:13.055 "driver_specific": { 00:14:13.055 "mp_policy": "active_passive", 00:14:13.055 "nvme": [ 00:14:13.055 { 00:14:13.055 "ctrlr_data": { 00:14:13.055 "ana_reporting": false, 00:14:13.055 "cntlid": 1, 00:14:13.055 "firmware_revision": "24.01.1", 00:14:13.055 "model_number": "SPDK bdev Controller", 00:14:13.055 "multi_ctrlr": true, 00:14:13.055 "oacs": { 00:14:13.055 "firmware": 0, 00:14:13.055 "format": 0, 00:14:13.055 "ns_manage": 0, 00:14:13.055 "security": 0 00:14:13.055 }, 00:14:13.055 "serial_number": "SPDK0", 00:14:13.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.055 "vendor_id": "0x8086" 00:14:13.055 }, 00:14:13.055 "ns_data": { 00:14:13.055 "can_share": true, 00:14:13.056 "id": 1 00:14:13.056 }, 00:14:13.056 "trid": { 00:14:13.056 "adrfam": "IPv4", 00:14:13.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.056 "traddr": "10.0.0.2", 00:14:13.056 "trsvcid": "4420", 00:14:13.056 "trtype": "TCP" 00:14:13.056 }, 00:14:13.056 "vs": { 00:14:13.056 "nvme_version": "1.3" 00:14:13.056 } 00:14:13.056 } 00:14:13.056 ] 00:14:13.056 }, 00:14:13.056 "name": "Nvme0n1", 00:14:13.056 "num_blocks": 38912, 00:14:13.056 "product_name": "NVMe disk", 00:14:13.056 "supported_io_types": { 00:14:13.056 "abort": true, 00:14:13.056 "compare": true, 00:14:13.056 "compare_and_write": true, 00:14:13.056 "flush": true, 00:14:13.056 "nvme_admin": true, 00:14:13.056 "nvme_io": true, 00:14:13.056 "read": true, 00:14:13.056 "reset": true, 00:14:13.056 "unmap": true, 00:14:13.056 "write": true, 00:14:13.056 "write_zeroes": true 00:14:13.056 }, 00:14:13.056 "uuid": "f15313a9-0c2e-4b2d-a72e-f98dcc0f8356", 00:14:13.056 "zoned": false 00:14:13.056 } 00:14:13.056 ] 00:14:13.056 07:20:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83689 00:14:13.056 07:20:14 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.056 07:20:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:13.314 Running I/O for 10 seconds... 00:14:14.251 Latency(us) 00:14:14.251 [2024-11-04T07:20:16.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.251 [2024-11-04T07:20:16.092Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.251 Nvme0n1 : 1.00 7491.00 29.26 0.00 0.00 0.00 0.00 0.00 00:14:14.251 [2024-11-04T07:20:16.092Z] =================================================================================================================== 00:14:14.251 [2024-11-04T07:20:16.092Z] Total : 7491.00 29.26 0.00 0.00 0.00 0.00 0.00 00:14:14.251 00:14:15.185 07:20:16 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:15.185 [2024-11-04T07:20:17.026Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.185 Nvme0n1 : 2.00 7461.00 29.14 0.00 0.00 0.00 0.00 0.00 00:14:15.185 [2024-11-04T07:20:17.026Z] =================================================================================================================== 00:14:15.185 [2024-11-04T07:20:17.026Z] Total : 7461.00 29.14 0.00 0.00 0.00 0.00 0.00 00:14:15.185 00:14:15.444 true 00:14:15.444 07:20:17 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:15.444 07:20:17 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:15.703 07:20:17 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:15.703 07:20:17 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:15.703 07:20:17 -- target/nvmf_lvs_grow.sh@65 -- # wait 83689 00:14:16.271 [2024-11-04T07:20:18.112Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.271 Nvme0n1 : 3.00 7363.33 28.76 0.00 0.00 0.00 0.00 0.00 00:14:16.271 [2024-11-04T07:20:18.112Z] =================================================================================================================== 00:14:16.271 [2024-11-04T07:20:18.112Z] Total : 7363.33 28.76 0.00 0.00 0.00 0.00 0.00 00:14:16.271 00:14:17.205 [2024-11-04T07:20:19.047Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.206 Nvme0n1 : 4.00 7327.50 28.62 0.00 0.00 0.00 0.00 0.00 00:14:17.206 [2024-11-04T07:20:19.047Z] =================================================================================================================== 00:14:17.206 [2024-11-04T07:20:19.047Z] Total : 7327.50 28.62 0.00 0.00 0.00 0.00 0.00 00:14:17.206 00:14:18.142 [2024-11-04T07:20:19.983Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.142 Nvme0n1 : 5.00 7281.40 28.44 0.00 0.00 0.00 0.00 0.00 00:14:18.142 [2024-11-04T07:20:19.983Z] =================================================================================================================== 00:14:18.142 [2024-11-04T07:20:19.983Z] Total : 7281.40 28.44 0.00 0.00 0.00 0.00 0.00 00:14:18.142 00:14:19.077 [2024-11-04T07:20:20.918Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.077 Nvme0n1 : 6.00 7251.17 28.32 0.00 0.00 0.00 0.00 0.00 00:14:19.077 [2024-11-04T07:20:20.918Z] =================================================================================================================== 
00:14:19.077 [2024-11-04T07:20:20.918Z] Total : 7251.17 28.32 0.00 0.00 0.00 0.00 0.00 00:14:19.077 00:14:20.453 [2024-11-04T07:20:22.294Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.453 Nvme0n1 : 7.00 7229.57 28.24 0.00 0.00 0.00 0.00 0.00 00:14:20.453 [2024-11-04T07:20:22.294Z] =================================================================================================================== 00:14:20.453 [2024-11-04T07:20:22.294Z] Total : 7229.57 28.24 0.00 0.00 0.00 0.00 0.00 00:14:20.453 00:14:21.402 [2024-11-04T07:20:23.243Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.402 Nvme0n1 : 8.00 7122.38 27.82 0.00 0.00 0.00 0.00 0.00 00:14:21.402 [2024-11-04T07:20:23.243Z] =================================================================================================================== 00:14:21.402 [2024-11-04T07:20:23.243Z] Total : 7122.38 27.82 0.00 0.00 0.00 0.00 0.00 00:14:21.402 00:14:22.349 [2024-11-04T07:20:24.190Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.349 Nvme0n1 : 9.00 7098.89 27.73 0.00 0.00 0.00 0.00 0.00 00:14:22.349 [2024-11-04T07:20:24.190Z] =================================================================================================================== 00:14:22.349 [2024-11-04T07:20:24.190Z] Total : 7098.89 27.73 0.00 0.00 0.00 0.00 0.00 00:14:22.349 00:14:23.285 [2024-11-04T07:20:25.126Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.285 Nvme0n1 : 10.00 7074.10 27.63 0.00 0.00 0.00 0.00 0.00 00:14:23.285 [2024-11-04T07:20:25.126Z] =================================================================================================================== 00:14:23.285 [2024-11-04T07:20:25.126Z] Total : 7074.10 27.63 0.00 0.00 0.00 0.00 0.00 00:14:23.285 00:14:23.285 00:14:23.285 Latency(us) 00:14:23.285 [2024-11-04T07:20:25.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.285 [2024-11-04T07:20:25.126Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.285 Nvme0n1 : 10.02 7074.70 27.64 0.00 0.00 18081.25 5004.57 133455.13 00:14:23.285 [2024-11-04T07:20:25.126Z] =================================================================================================================== 00:14:23.285 [2024-11-04T07:20:25.126Z] Total : 7074.70 27.64 0.00 0.00 18081.25 5004.57 133455.13 00:14:23.285 0 00:14:23.285 07:20:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83641 00:14:23.285 07:20:24 -- common/autotest_common.sh@926 -- # '[' -z 83641 ']' 00:14:23.285 07:20:24 -- common/autotest_common.sh@930 -- # kill -0 83641 00:14:23.285 07:20:24 -- common/autotest_common.sh@931 -- # uname 00:14:23.285 07:20:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:23.285 07:20:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83641 00:14:23.285 killing process with pid 83641 00:14:23.285 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.285 00:14:23.285 Latency(us) 00:14:23.285 [2024-11-04T07:20:25.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.285 [2024-11-04T07:20:25.126Z] =================================================================================================================== 00:14:23.285 [2024-11-04T07:20:25.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.285 07:20:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:23.285 
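For the cluster numbers being checked in this test: the lvstore sits on a 200 MiB AIO file with a 4 MiB cluster size, which yields 49 usable data clusters (the remainder presumably taken by lvstore metadata). The backing file is grown to 400 MiB and rescanned up front, but the lvstore keeps reporting 49 clusters until bdev_lvol_grow_lvstore is issued in the middle of the bdevperf run, after which the same query is expected to return 99. A minimal sketch of that grow-and-verify step, using this run's lvstore uuid:

  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # 51200 -> 102400 blocks, as the rescan notice shows
  rpc.py bdev_aio_rescan aio_bdev                                           # lvstore still reports 49 data clusters here
  rpc.py bdev_lvol_grow_lvstore -u 0f08e919-0b71-487c-9ba9-2813d7a05c74     # issued while bdevperf I/O is in flight
  rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 | jq -r '.[0].total_data_clusters'   # expect 99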
07:20:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:23.285 07:20:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83641' 00:14:23.285 07:20:24 -- common/autotest_common.sh@945 -- # kill 83641 00:14:23.285 07:20:24 -- common/autotest_common.sh@950 -- # wait 83641 00:14:23.543 07:20:25 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:23.802 07:20:25 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:23.802 07:20:25 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:24.061 07:20:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:24.061 07:20:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:24.061 07:20:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:24.319 [2024-11-04 07:20:25.928586] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:24.319 07:20:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:24.319 07:20:25 -- common/autotest_common.sh@640 -- # local es=0 00:14:24.319 07:20:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:24.319 07:20:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.319 07:20:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:24.319 07:20:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.319 07:20:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:24.319 07:20:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.319 07:20:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:24.319 07:20:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.319 07:20:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:24.319 07:20:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:24.578 2024/11/04 07:20:26 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0f08e919-0b71-487c-9ba9-2813d7a05c74], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:24.578 request: 00:14:24.578 { 00:14:24.578 "method": "bdev_lvol_get_lvstores", 00:14:24.578 "params": { 00:14:24.578 "uuid": "0f08e919-0b71-487c-9ba9-2813d7a05c74" 00:14:24.578 } 00:14:24.578 } 00:14:24.578 Got JSON-RPC error response 00:14:24.578 GoRPCClient: error on JSON-RPC call 00:14:24.578 07:20:26 -- common/autotest_common.sh@643 -- # es=1 00:14:24.578 07:20:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:24.578 07:20:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:24.578 07:20:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:24.578 07:20:26 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:24.838 
aio_bdev 00:14:24.838 07:20:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 00:14:24.838 07:20:26 -- common/autotest_common.sh@887 -- # local bdev_name=f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 00:14:24.838 07:20:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.838 07:20:26 -- common/autotest_common.sh@889 -- # local i 00:14:24.838 07:20:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.838 07:20:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.838 07:20:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:25.096 07:20:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 -t 2000 00:14:25.096 [ 00:14:25.096 { 00:14:25.096 "aliases": [ 00:14:25.096 "lvs/lvol" 00:14:25.096 ], 00:14:25.096 "assigned_rate_limits": { 00:14:25.096 "r_mbytes_per_sec": 0, 00:14:25.096 "rw_ios_per_sec": 0, 00:14:25.096 "rw_mbytes_per_sec": 0, 00:14:25.096 "w_mbytes_per_sec": 0 00:14:25.096 }, 00:14:25.096 "block_size": 4096, 00:14:25.096 "claimed": false, 00:14:25.096 "driver_specific": { 00:14:25.096 "lvol": { 00:14:25.096 "base_bdev": "aio_bdev", 00:14:25.096 "clone": false, 00:14:25.096 "esnap_clone": false, 00:14:25.097 "lvol_store_uuid": "0f08e919-0b71-487c-9ba9-2813d7a05c74", 00:14:25.097 "snapshot": false, 00:14:25.097 "thin_provision": false 00:14:25.097 } 00:14:25.097 }, 00:14:25.097 "name": "f15313a9-0c2e-4b2d-a72e-f98dcc0f8356", 00:14:25.097 "num_blocks": 38912, 00:14:25.097 "product_name": "Logical Volume", 00:14:25.097 "supported_io_types": { 00:14:25.097 "abort": false, 00:14:25.097 "compare": false, 00:14:25.097 "compare_and_write": false, 00:14:25.097 "flush": false, 00:14:25.097 "nvme_admin": false, 00:14:25.097 "nvme_io": false, 00:14:25.097 "read": true, 00:14:25.097 "reset": true, 00:14:25.097 "unmap": true, 00:14:25.097 "write": true, 00:14:25.097 "write_zeroes": true 00:14:25.097 }, 00:14:25.097 "uuid": "f15313a9-0c2e-4b2d-a72e-f98dcc0f8356", 00:14:25.097 "zoned": false 00:14:25.097 } 00:14:25.097 ] 00:14:25.355 07:20:26 -- common/autotest_common.sh@895 -- # return 0 00:14:25.355 07:20:26 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:25.355 07:20:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:25.615 07:20:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:25.615 07:20:27 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:25.615 07:20:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:25.615 07:20:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:25.615 07:20:27 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f15313a9-0c2e-4b2d-a72e-f98dcc0f8356 00:14:26.182 07:20:27 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f08e919-0b71-487c-9ba9-2813d7a05c74 00:14:26.182 07:20:27 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.446 07:20:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.016 00:14:27.016 real 0m17.931s 00:14:27.016 user 0m17.261s 00:14:27.016 sys 
0m2.165s 00:14:27.016 07:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.016 07:20:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.016 ************************************ 00:14:27.016 END TEST lvs_grow_clean 00:14:27.016 ************************************ 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:27.016 07:20:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:27.016 07:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.016 07:20:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.016 ************************************ 00:14:27.016 START TEST lvs_grow_dirty 00:14:27.016 ************************************ 00:14:27.016 07:20:28 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.016 07:20:28 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:27.274 07:20:29 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:27.274 07:20:29 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:27.532 07:20:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:27.532 07:20:29 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:27.532 07:20:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:27.790 07:20:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:27.790 07:20:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:27.790 07:20:29 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a lvol 150 00:14:28.050 07:20:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:28.050 07:20:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.050 07:20:29 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:28.308 [2024-11-04 07:20:29.994664] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:28.308 [2024-11-04 07:20:29.994730] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:28.308 true 00:14:28.308 07:20:30 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:28.308 07:20:30 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:28.566 07:20:30 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:28.566 07:20:30 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.824 07:20:30 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:28.824 07:20:30 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:29.082 07:20:30 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.341 07:20:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84079 00:14:29.341 07:20:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.341 07:20:31 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:29.341 07:20:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84079 /var/tmp/bdevperf.sock 00:14:29.341 07:20:31 -- common/autotest_common.sh@819 -- # '[' -z 84079 ']' 00:14:29.341 07:20:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.341 07:20:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.341 07:20:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.341 07:20:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.341 07:20:31 -- common/autotest_common.sh@10 -- # set +x 00:14:29.341 [2024-11-04 07:20:31.155853] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:29.341 [2024-11-04 07:20:31.155956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84079 ] 00:14:29.600 [2024-11-04 07:20:31.297839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.600 [2024-11-04 07:20:31.375448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.533 07:20:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.533 07:20:32 -- common/autotest_common.sh@852 -- # return 0 00:14:30.533 07:20:32 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:30.792 Nvme0n1 00:14:30.792 07:20:32 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.051 [ 00:14:31.051 { 00:14:31.051 "aliases": [ 00:14:31.051 "8e8006ef-ea81-4e0e-8526-c5c15fd24711" 00:14:31.051 ], 00:14:31.051 "assigned_rate_limits": { 00:14:31.051 "r_mbytes_per_sec": 0, 00:14:31.051 "rw_ios_per_sec": 0, 00:14:31.051 "rw_mbytes_per_sec": 0, 00:14:31.051 "w_mbytes_per_sec": 0 00:14:31.051 }, 00:14:31.051 "block_size": 4096, 00:14:31.051 "claimed": false, 00:14:31.051 "driver_specific": { 00:14:31.051 "mp_policy": "active_passive", 00:14:31.051 "nvme": [ 00:14:31.051 { 00:14:31.051 "ctrlr_data": { 00:14:31.051 "ana_reporting": false, 00:14:31.051 "cntlid": 1, 00:14:31.051 "firmware_revision": "24.01.1", 00:14:31.051 "model_number": "SPDK bdev Controller", 00:14:31.051 "multi_ctrlr": true, 00:14:31.051 "oacs": { 00:14:31.051 "firmware": 0, 00:14:31.051 "format": 0, 00:14:31.051 "ns_manage": 0, 00:14:31.051 "security": 0 00:14:31.051 }, 00:14:31.051 "serial_number": "SPDK0", 00:14:31.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.051 "vendor_id": "0x8086" 00:14:31.051 }, 00:14:31.051 "ns_data": { 00:14:31.051 "can_share": true, 00:14:31.051 "id": 1 00:14:31.051 }, 00:14:31.051 "trid": { 00:14:31.051 "adrfam": "IPv4", 00:14:31.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.051 "traddr": "10.0.0.2", 00:14:31.051 "trsvcid": "4420", 00:14:31.051 "trtype": "TCP" 00:14:31.051 }, 00:14:31.051 "vs": { 00:14:31.051 "nvme_version": "1.3" 00:14:31.051 } 00:14:31.051 } 00:14:31.051 ] 00:14:31.051 }, 00:14:31.051 "name": "Nvme0n1", 00:14:31.051 "num_blocks": 38912, 00:14:31.051 "product_name": "NVMe disk", 00:14:31.051 "supported_io_types": { 00:14:31.051 "abort": true, 00:14:31.051 "compare": true, 00:14:31.051 "compare_and_write": true, 00:14:31.051 "flush": true, 00:14:31.051 "nvme_admin": true, 00:14:31.051 "nvme_io": true, 00:14:31.051 "read": true, 00:14:31.051 "reset": true, 00:14:31.051 "unmap": true, 00:14:31.051 "write": true, 00:14:31.051 "write_zeroes": true 00:14:31.051 }, 00:14:31.051 "uuid": "8e8006ef-ea81-4e0e-8526-c5c15fd24711", 00:14:31.051 "zoned": false 00:14:31.051 } 00:14:31.051 ] 00:14:31.051 07:20:32 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84121 00:14:31.051 07:20:32 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.051 07:20:32 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.051 Running I/O for 10 seconds... 
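As in the clean case, the load generator here is SPDK's bdevperf example app rather than a kernel NVMe initiator: it is launched with -o 4096 -q 128 -w randwrite -t 10 -z (4 KiB random writes at queue depth 128 for 10 seconds, with -z keeping it idle until perform_tests is called, which is how the script uses it), the exported namespace is attached to it as a bdev over NVMe/TCP, and the run is then triggered over its RPC socket. Roughly, with rpc.py and bdevperf.py standing for the full repo paths traced above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second lines that follow are bdevperf's running numbers for Nvme0n1, with the overall latency summary printed once the 10 seconds are up.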
00:14:31.989 Latency(us) 00:14:31.989 [2024-11-04T07:20:33.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.989 [2024-11-04T07:20:33.830Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.989 Nvme0n1 : 1.00 7406.00 28.93 0.00 0.00 0.00 0.00 0.00 00:14:31.989 [2024-11-04T07:20:33.830Z] =================================================================================================================== 00:14:31.989 [2024-11-04T07:20:33.830Z] Total : 7406.00 28.93 0.00 0.00 0.00 0.00 0.00 00:14:31.989 00:14:32.924 07:20:34 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:33.182 [2024-11-04T07:20:35.023Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.182 Nvme0n1 : 2.00 7480.00 29.22 0.00 0.00 0.00 0.00 0.00 00:14:33.182 [2024-11-04T07:20:35.023Z] =================================================================================================================== 00:14:33.182 [2024-11-04T07:20:35.023Z] Total : 7480.00 29.22 0.00 0.00 0.00 0.00 0.00 00:14:33.182 00:14:33.182 true 00:14:33.182 07:20:34 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:33.182 07:20:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:33.441 07:20:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:33.442 07:20:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:33.442 07:20:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 84121 00:14:34.009 [2024-11-04T07:20:35.850Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.009 Nvme0n1 : 3.00 7456.33 29.13 0.00 0.00 0.00 0.00 0.00 00:14:34.009 [2024-11-04T07:20:35.850Z] =================================================================================================================== 00:14:34.009 [2024-11-04T07:20:35.850Z] Total : 7456.33 29.13 0.00 0.00 0.00 0.00 0.00 00:14:34.009 00:14:35.386 [2024-11-04T07:20:37.227Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.386 Nvme0n1 : 4.00 7448.00 29.09 0.00 0.00 0.00 0.00 0.00 00:14:35.386 [2024-11-04T07:20:37.227Z] =================================================================================================================== 00:14:35.386 [2024-11-04T07:20:37.227Z] Total : 7448.00 29.09 0.00 0.00 0.00 0.00 0.00 00:14:35.386 00:14:35.954 [2024-11-04T07:20:37.795Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.954 Nvme0n1 : 5.00 7430.20 29.02 0.00 0.00 0.00 0.00 0.00 00:14:35.954 [2024-11-04T07:20:37.795Z] =================================================================================================================== 00:14:35.954 [2024-11-04T07:20:37.795Z] Total : 7430.20 29.02 0.00 0.00 0.00 0.00 0.00 00:14:35.954 00:14:37.388 [2024-11-04T07:20:39.229Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.388 Nvme0n1 : 6.00 7319.83 28.59 0.00 0.00 0.00 0.00 0.00 00:14:37.388 [2024-11-04T07:20:39.229Z] =================================================================================================================== 00:14:37.388 [2024-11-04T07:20:39.229Z] Total : 7319.83 28.59 0.00 0.00 0.00 0.00 0.00 00:14:37.388 00:14:37.976 [2024-11-04T07:20:39.817Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:37.976 Nvme0n1 : 7.00 7262.86 28.37 0.00 0.00 0.00 0.00 0.00 00:14:37.977 [2024-11-04T07:20:39.818Z] =================================================================================================================== 00:14:37.977 [2024-11-04T07:20:39.818Z] Total : 7262.86 28.37 0.00 0.00 0.00 0.00 0.00 00:14:37.977 00:14:39.357 [2024-11-04T07:20:41.198Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.357 Nvme0n1 : 8.00 7251.75 28.33 0.00 0.00 0.00 0.00 0.00 00:14:39.357 [2024-11-04T07:20:41.198Z] =================================================================================================================== 00:14:39.357 [2024-11-04T07:20:41.198Z] Total : 7251.75 28.33 0.00 0.00 0.00 0.00 0.00 00:14:39.357 00:14:40.293 [2024-11-04T07:20:42.134Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.293 Nvme0n1 : 9.00 7242.33 28.29 0.00 0.00 0.00 0.00 0.00 00:14:40.293 [2024-11-04T07:20:42.134Z] =================================================================================================================== 00:14:40.293 [2024-11-04T07:20:42.134Z] Total : 7242.33 28.29 0.00 0.00 0.00 0.00 0.00 00:14:40.293 00:14:41.229 [2024-11-04T07:20:43.070Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.229 Nvme0n1 : 10.00 7214.00 28.18 0.00 0.00 0.00 0.00 0.00 00:14:41.229 [2024-11-04T07:20:43.070Z] =================================================================================================================== 00:14:41.229 [2024-11-04T07:20:43.070Z] Total : 7214.00 28.18 0.00 0.00 0.00 0.00 0.00 00:14:41.229 00:14:41.229 00:14:41.229 Latency(us) 00:14:41.229 [2024-11-04T07:20:43.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.229 [2024-11-04T07:20:43.070Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.229 Nvme0n1 : 10.02 7215.42 28.19 0.00 0.00 17727.31 4230.05 134408.38 00:14:41.229 [2024-11-04T07:20:43.070Z] =================================================================================================================== 00:14:41.229 [2024-11-04T07:20:43.070Z] Total : 7215.42 28.19 0.00 0.00 17727.31 4230.05 134408.38 00:14:41.229 0 00:14:41.229 07:20:42 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84079 00:14:41.229 07:20:42 -- common/autotest_common.sh@926 -- # '[' -z 84079 ']' 00:14:41.229 07:20:42 -- common/autotest_common.sh@930 -- # kill -0 84079 00:14:41.229 07:20:42 -- common/autotest_common.sh@931 -- # uname 00:14:41.229 07:20:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.229 07:20:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84079 00:14:41.229 07:20:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.229 07:20:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.229 killing process with pid 84079 00:14:41.229 07:20:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84079' 00:14:41.229 07:20:42 -- common/autotest_common.sh@945 -- # kill 84079 00:14:41.229 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.229 00:14:41.229 Latency(us) 00:14:41.229 [2024-11-04T07:20:43.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.229 [2024-11-04T07:20:43.070Z] =================================================================================================================== 00:14:41.229 [2024-11-04T07:20:43.070Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:14:41.229 07:20:42 -- common/autotest_common.sh@950 -- # wait 84079 00:14:41.488 07:20:43 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:41.747 07:20:43 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:41.747 07:20:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83473 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@74 -- # wait 83473 00:14:42.006 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83473 Killed "${NVMF_APP[@]}" "$@" 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:42.006 07:20:43 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:42.006 07:20:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:42.006 07:20:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:42.006 07:20:43 -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 07:20:43 -- nvmf/common.sh@469 -- # nvmfpid=84278 00:14:42.006 07:20:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:42.006 07:20:43 -- nvmf/common.sh@470 -- # waitforlisten 84278 00:14:42.006 07:20:43 -- common/autotest_common.sh@819 -- # '[' -z 84278 ']' 00:14:42.006 07:20:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.006 07:20:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.006 07:20:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.006 07:20:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.006 07:20:43 -- common/autotest_common.sh@10 -- # set +x 00:14:42.006 [2024-11-04 07:20:43.731672] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:42.006 [2024-11-04 07:20:43.731750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.265 [2024-11-04 07:20:43.867101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.265 [2024-11-04 07:20:43.924099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.265 [2024-11-04 07:20:43.924232] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.265 [2024-11-04 07:20:43.924244] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.265 [2024-11-04 07:20:43.924252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.265 [2024-11-04 07:20:43.924274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.832 07:20:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.832 07:20:44 -- common/autotest_common.sh@852 -- # return 0 00:14:42.832 07:20:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:42.832 07:20:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:42.832 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:14:43.090 07:20:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.091 07:20:44 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.349 [2024-11-04 07:20:44.941785] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:43.349 [2024-11-04 07:20:44.942236] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:43.349 [2024-11-04 07:20:44.942599] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:43.349 07:20:44 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:43.349 07:20:44 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:43.349 07:20:44 -- common/autotest_common.sh@887 -- # local bdev_name=8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:43.349 07:20:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:43.349 07:20:44 -- common/autotest_common.sh@889 -- # local i 00:14:43.349 07:20:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:43.349 07:20:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:43.349 07:20:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.608 07:20:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8e8006ef-ea81-4e0e-8526-c5c15fd24711 -t 2000 00:14:43.608 [ 00:14:43.608 { 00:14:43.608 "aliases": [ 00:14:43.608 "lvs/lvol" 00:14:43.608 ], 00:14:43.608 "assigned_rate_limits": { 00:14:43.608 "r_mbytes_per_sec": 0, 00:14:43.608 "rw_ios_per_sec": 0, 00:14:43.608 "rw_mbytes_per_sec": 0, 00:14:43.608 "w_mbytes_per_sec": 0 00:14:43.608 }, 00:14:43.608 "block_size": 4096, 00:14:43.608 "claimed": false, 00:14:43.608 "driver_specific": { 00:14:43.608 "lvol": { 00:14:43.608 "base_bdev": "aio_bdev", 00:14:43.608 "clone": false, 00:14:43.608 "esnap_clone": false, 00:14:43.608 "lvol_store_uuid": "e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a", 00:14:43.608 "snapshot": false, 00:14:43.608 "thin_provision": false 00:14:43.608 } 00:14:43.608 }, 00:14:43.608 "name": "8e8006ef-ea81-4e0e-8526-c5c15fd24711", 00:14:43.608 "num_blocks": 38912, 00:14:43.608 "product_name": "Logical Volume", 00:14:43.608 "supported_io_types": { 00:14:43.608 "abort": false, 00:14:43.608 "compare": false, 00:14:43.608 "compare_and_write": false, 00:14:43.608 "flush": false, 00:14:43.608 "nvme_admin": false, 00:14:43.608 "nvme_io": false, 00:14:43.608 "read": true, 00:14:43.608 "reset": true, 00:14:43.608 "unmap": true, 00:14:43.608 "write": true, 00:14:43.608 "write_zeroes": true 00:14:43.608 }, 00:14:43.608 "uuid": "8e8006ef-ea81-4e0e-8526-c5c15fd24711", 00:14:43.608 "zoned": false 00:14:43.608 } 00:14:43.608 ] 00:14:43.608 07:20:45 -- common/autotest_common.sh@895 -- # return 0 00:14:43.608 07:20:45 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:43.608 07:20:45 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:43.867 07:20:45 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:43.867 07:20:45 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:43.867 07:20:45 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:44.126 07:20:45 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:44.126 07:20:45 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.384 [2024-11-04 07:20:46.095281] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:44.384 07:20:46 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:44.384 07:20:46 -- common/autotest_common.sh@640 -- # local es=0 00:14:44.384 07:20:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:44.384 07:20:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.384 07:20:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.384 07:20:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.384 07:20:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.384 07:20:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.384 07:20:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.384 07:20:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.384 07:20:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.384 07:20:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:44.643 2024/11/04 07:20:46 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:44.643 request: 00:14:44.643 { 00:14:44.643 "method": "bdev_lvol_get_lvstores", 00:14:44.643 "params": { 00:14:44.643 "uuid": "e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a" 00:14:44.643 } 00:14:44.643 } 00:14:44.643 Got JSON-RPC error response 00:14:44.643 GoRPCClient: error on JSON-RPC call 00:14:44.643 07:20:46 -- common/autotest_common.sh@643 -- # es=1 00:14:44.643 07:20:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:44.643 07:20:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:44.643 07:20:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:44.643 07:20:46 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.902 aio_bdev 00:14:44.902 07:20:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:44.902 07:20:46 -- common/autotest_common.sh@887 -- # local bdev_name=8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:44.902 07:20:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:44.902 
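This is the part that makes the dirty variant dirty: the first target (pid 83473) was killed with kill -9 while the grown lvstore still had unflushed metadata, a fresh nvmf_tgt (pid 84278) was started, and re-creating the AIO bdev made the blobstore detect the unclean shutdown and run recovery (the bs_recover / "Recover: blob 0x0" / "Recover: blob 0x1" notices above). The checks that follow assert the grown geometry survived the crash: 99 total data clusters and 61 free ones, which matches 99 minus the 38 clusters backing the thick-provisioned 150 MiB lvol. A sketch of that crash-and-verify path, with this run's uuid:

  kill -9 "$nvmfpid"        # SIGKILL the target while lvstore metadata is still dirty
  # start a new nvmf_tgt in the same netns, then re-attach the backing file:
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # blobstore recovery runs here
  rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a | jq -r '.[0].free_clusters'         # expect 61
  rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a | jq -r '.[0].total_data_clusters'   # expect 99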
07:20:46 -- common/autotest_common.sh@889 -- # local i 00:14:44.902 07:20:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:44.902 07:20:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:44.902 07:20:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:44.902 07:20:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8e8006ef-ea81-4e0e-8526-c5c15fd24711 -t 2000 00:14:45.161 [ 00:14:45.161 { 00:14:45.161 "aliases": [ 00:14:45.161 "lvs/lvol" 00:14:45.161 ], 00:14:45.161 "assigned_rate_limits": { 00:14:45.161 "r_mbytes_per_sec": 0, 00:14:45.161 "rw_ios_per_sec": 0, 00:14:45.161 "rw_mbytes_per_sec": 0, 00:14:45.161 "w_mbytes_per_sec": 0 00:14:45.161 }, 00:14:45.161 "block_size": 4096, 00:14:45.161 "claimed": false, 00:14:45.161 "driver_specific": { 00:14:45.161 "lvol": { 00:14:45.161 "base_bdev": "aio_bdev", 00:14:45.161 "clone": false, 00:14:45.161 "esnap_clone": false, 00:14:45.161 "lvol_store_uuid": "e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a", 00:14:45.161 "snapshot": false, 00:14:45.161 "thin_provision": false 00:14:45.161 } 00:14:45.161 }, 00:14:45.161 "name": "8e8006ef-ea81-4e0e-8526-c5c15fd24711", 00:14:45.161 "num_blocks": 38912, 00:14:45.161 "product_name": "Logical Volume", 00:14:45.161 "supported_io_types": { 00:14:45.161 "abort": false, 00:14:45.161 "compare": false, 00:14:45.161 "compare_and_write": false, 00:14:45.161 "flush": false, 00:14:45.161 "nvme_admin": false, 00:14:45.161 "nvme_io": false, 00:14:45.161 "read": true, 00:14:45.161 "reset": true, 00:14:45.161 "unmap": true, 00:14:45.161 "write": true, 00:14:45.161 "write_zeroes": true 00:14:45.161 }, 00:14:45.161 "uuid": "8e8006ef-ea81-4e0e-8526-c5c15fd24711", 00:14:45.161 "zoned": false 00:14:45.161 } 00:14:45.161 ] 00:14:45.161 07:20:46 -- common/autotest_common.sh@895 -- # return 0 00:14:45.161 07:20:46 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:45.161 07:20:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:45.420 07:20:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:45.420 07:20:47 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:45.420 07:20:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:45.678 07:20:47 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:45.678 07:20:47 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8e8006ef-ea81-4e0e-8526-c5c15fd24711 00:14:45.937 07:20:47 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2dc7c66-4a2d-46cd-afb9-7a1ed5016a7a 00:14:46.195 07:20:47 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.454 07:20:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.712 00:14:46.712 real 0m19.813s 00:14:46.712 user 0m38.656s 00:14:46.712 sys 0m10.284s 00:14:46.712 07:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.712 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:14:46.712 ************************************ 00:14:46.712 END TEST lvs_grow_dirty 00:14:46.712 ************************************ 00:14:46.712 07:20:48 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:46.712 07:20:48 -- common/autotest_common.sh@796 -- # type=--id 00:14:46.712 07:20:48 -- common/autotest_common.sh@797 -- # id=0 00:14:46.712 07:20:48 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:46.712 07:20:48 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:46.971 07:20:48 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:46.971 07:20:48 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:46.971 07:20:48 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:46.971 07:20:48 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:46.971 nvmf_trace.0 00:14:46.971 07:20:48 -- common/autotest_common.sh@811 -- # return 0 00:14:46.971 07:20:48 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:46.971 07:20:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:46.971 07:20:48 -- nvmf/common.sh@116 -- # sync 00:14:46.971 07:20:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.971 07:20:48 -- nvmf/common.sh@119 -- # set +e 00:14:46.971 07:20:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.971 07:20:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.971 rmmod nvme_tcp 00:14:46.971 rmmod nvme_fabrics 00:14:46.971 rmmod nvme_keyring 00:14:46.971 07:20:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.971 07:20:48 -- nvmf/common.sh@123 -- # set -e 00:14:46.971 07:20:48 -- nvmf/common.sh@124 -- # return 0 00:14:46.971 07:20:48 -- nvmf/common.sh@477 -- # '[' -n 84278 ']' 00:14:46.971 07:20:48 -- nvmf/common.sh@478 -- # killprocess 84278 00:14:46.971 07:20:48 -- common/autotest_common.sh@926 -- # '[' -z 84278 ']' 00:14:46.971 07:20:48 -- common/autotest_common.sh@930 -- # kill -0 84278 00:14:47.230 07:20:48 -- common/autotest_common.sh@931 -- # uname 00:14:47.230 07:20:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:47.230 07:20:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84278 00:14:47.230 07:20:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:47.230 07:20:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:47.230 killing process with pid 84278 00:14:47.230 07:20:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84278' 00:14:47.230 07:20:48 -- common/autotest_common.sh@945 -- # kill 84278 00:14:47.230 07:20:48 -- common/autotest_common.sh@950 -- # wait 84278 00:14:47.230 07:20:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:47.230 07:20:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:47.230 07:20:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:47.230 07:20:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.230 07:20:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:47.230 07:20:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.230 07:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.230 07:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.230 07:20:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:47.230 ************************************ 00:14:47.230 END TEST nvmf_lvs_grow 00:14:47.230 ************************************ 00:14:47.230 00:14:47.230 real 0m40.243s 00:14:47.230 user 1m1.893s 00:14:47.230 sys 0m13.190s 00:14:47.230 07:20:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.230 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 07:20:49 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.489 07:20:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.489 07:20:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.489 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 ************************************ 00:14:47.489 START TEST nvmf_bdev_io_wait 00:14:47.489 ************************************ 00:14:47.489 07:20:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.489 * Looking for test storage... 00:14:47.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.489 07:20:49 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.489 07:20:49 -- nvmf/common.sh@7 -- # uname -s 00:14:47.489 07:20:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.489 07:20:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.489 07:20:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.489 07:20:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.489 07:20:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.489 07:20:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.489 07:20:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.489 07:20:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.489 07:20:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.489 07:20:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.489 07:20:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:47.489 07:20:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:47.489 07:20:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.489 07:20:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.489 07:20:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.489 07:20:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.489 07:20:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.489 07:20:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.489 07:20:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.489 07:20:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.489 07:20:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.489 07:20:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.489 07:20:49 -- paths/export.sh@5 -- # export PATH 00:14:47.489 07:20:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.489 07:20:49 -- nvmf/common.sh@46 -- # : 0 00:14:47.489 07:20:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.489 07:20:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.489 07:20:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.489 07:20:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.489 07:20:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.489 07:20:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.489 07:20:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.489 07:20:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.489 07:20:49 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.489 07:20:49 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.489 07:20:49 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:47.489 07:20:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.489 07:20:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.489 07:20:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.489 07:20:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.489 07:20:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.489 07:20:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.489 07:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.489 07:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.489 07:20:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:47.489 07:20:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:47.489 07:20:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:47.489 07:20:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:47.489 07:20:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:14:47.489 07:20:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:47.489 07:20:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.489 07:20:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.489 07:20:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.489 07:20:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:47.489 07:20:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.489 07:20:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.489 07:20:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.489 07:20:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.489 07:20:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.489 07:20:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.489 07:20:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.489 07:20:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.489 07:20:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:47.489 07:20:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:47.489 Cannot find device "nvmf_tgt_br" 00:14:47.489 07:20:49 -- nvmf/common.sh@154 -- # true 00:14:47.489 07:20:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.489 Cannot find device "nvmf_tgt_br2" 00:14:47.489 07:20:49 -- nvmf/common.sh@155 -- # true 00:14:47.489 07:20:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:47.490 07:20:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:47.490 Cannot find device "nvmf_tgt_br" 00:14:47.490 07:20:49 -- nvmf/common.sh@157 -- # true 00:14:47.490 07:20:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:47.490 Cannot find device "nvmf_tgt_br2" 00:14:47.490 07:20:49 -- nvmf/common.sh@158 -- # true 00:14:47.490 07:20:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:47.748 07:20:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:47.748 07:20:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.748 07:20:49 -- nvmf/common.sh@161 -- # true 00:14:47.748 07:20:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.748 07:20:49 -- nvmf/common.sh@162 -- # true 00:14:47.748 07:20:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.748 07:20:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.748 07:20:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.748 07:20:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.748 07:20:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.748 07:20:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.748 07:20:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.748 07:20:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:47.748 07:20:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:47.748 
07:20:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:47.748 07:20:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:47.748 07:20:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:47.748 07:20:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:47.748 07:20:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.748 07:20:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.748 07:20:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.748 07:20:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:47.748 07:20:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:47.748 07:20:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.748 07:20:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.748 07:20:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.007 07:20:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.007 07:20:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.007 07:20:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:48.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:48.007 00:14:48.007 --- 10.0.0.2 ping statistics --- 00:14:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.007 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:48.007 07:20:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:48.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:48.007 00:14:48.007 --- 10.0.0.3 ping statistics --- 00:14:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.007 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:48.007 07:20:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:48.007 00:14:48.007 --- 10.0.0.1 ping statistics --- 00:14:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.007 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:48.007 07:20:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.007 07:20:49 -- nvmf/common.sh@421 -- # return 0 00:14:48.007 07:20:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:48.007 07:20:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.007 07:20:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:48.007 07:20:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:48.007 07:20:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.007 07:20:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:48.007 07:20:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:48.007 07:20:49 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:48.007 07:20:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:48.007 07:20:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:48.007 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 07:20:49 -- nvmf/common.sh@469 -- # nvmfpid=84686 00:14:48.007 07:20:49 -- nvmf/common.sh@470 -- # waitforlisten 84686 00:14:48.007 07:20:49 -- common/autotest_common.sh@819 -- # '[' -z 84686 ']' 00:14:48.007 07:20:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:48.007 07:20:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.007 07:20:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.007 07:20:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.007 07:20:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.007 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 [2024-11-04 07:20:49.691098] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.007 [2024-11-04 07:20:49.691173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.007 [2024-11-04 07:20:49.823944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.265 [2024-11-04 07:20:49.884220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.265 [2024-11-04 07:20:49.884374] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.265 [2024-11-04 07:20:49.884387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.265 [2024-11-04 07:20:49.884395] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
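For readers following the trace, the nvmf_veth_init sequence above builds a small virtual topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, three veth pairs, a bridge (nvmf_br) joining their host-side ends, and iptables rules admitting NVMe/TCP traffic on port 4420. Below is a condensed sketch reconstructed from the trace, not the helper itself (the real function in test/nvmf/common.sh also tears down leftovers from a previous run); it needs root:

ip netns add nvmf_tgt_ns_spdk
# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and address everything in 10.0.0.0/24.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring every link up, inside and outside the namespace.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
# Bridge the host-side ends together and let NVMe/TCP (port 4420) through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity pings, matching the trace: 10.0.0.2/.3 from the host, 10.0.0.1 from inside the namespace.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1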
00:14:48.265 [2024-11-04 07:20:49.884554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.265 [2024-11-04 07:20:49.884720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.265 [2024-11-04 07:20:49.885478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.265 [2024-11-04 07:20:49.885527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.265 07:20:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.265 07:20:49 -- common/autotest_common.sh@852 -- # return 0 00:14:48.265 07:20:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.265 07:20:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:48.265 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:48.265 07:20:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.265 07:20:49 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:48.265 07:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.265 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:14:48.265 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.265 07:20:50 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:48.265 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.265 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.265 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.265 07:20:50 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.266 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.266 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.266 [2024-11-04 07:20:50.077156] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.266 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.266 07:20:50 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:48.266 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.266 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.525 Malloc0 00:14:48.525 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.525 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.525 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.525 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.525 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.525 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.525 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.525 07:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.525 07:20:50 -- common/autotest_common.sh@10 -- # set +x 00:14:48.525 [2024-11-04 07:20:50.136609] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.525 07:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84729 00:14:48.525 07:20:50 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84732 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # config=() 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.525 07:20:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.525 { 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme$subsystem", 00:14:48.525 "trtype": "$TEST_TRANSPORT", 00:14:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "$NVMF_PORT", 00:14:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.525 "hdgst": ${hdgst:-false}, 00:14:48.525 "ddgst": ${ddgst:-false} 00:14:48.525 }, 00:14:48.525 "method": "bdev_nvme_attach_controller" 00:14:48.525 } 00:14:48.525 EOF 00:14:48.525 )") 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # config=() 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.525 07:20:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.525 { 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme$subsystem", 00:14:48.525 "trtype": "$TEST_TRANSPORT", 00:14:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "$NVMF_PORT", 00:14:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.525 "hdgst": ${hdgst:-false}, 00:14:48.525 "ddgst": ${ddgst:-false} 00:14:48.525 }, 00:14:48.525 "method": "bdev_nvme_attach_controller" 00:14:48.525 } 00:14:48.525 EOF 00:14:48.525 )") 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84734 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # cat 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # cat 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84738 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@35 -- # sync 00:14:48.525 07:20:50 -- nvmf/common.sh@544 -- # jq . 
00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # config=() 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.525 07:20:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.525 07:20:50 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.525 { 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme$subsystem", 00:14:48.525 "trtype": "$TEST_TRANSPORT", 00:14:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "$NVMF_PORT", 00:14:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.525 "hdgst": ${hdgst:-false}, 00:14:48.525 "ddgst": ${ddgst:-false} 00:14:48.525 }, 00:14:48.525 "method": "bdev_nvme_attach_controller" 00:14:48.525 } 00:14:48.525 EOF 00:14:48.525 )") 00:14:48.525 07:20:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.525 07:20:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme1", 00:14:48.525 "trtype": "tcp", 00:14:48.525 "traddr": "10.0.0.2", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "4420", 00:14:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.525 "hdgst": false, 00:14:48.525 "ddgst": false 00:14:48.525 }, 00:14:48.525 "method": "bdev_nvme_attach_controller" 00:14:48.525 }' 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # config=() 00:14:48.525 07:20:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.525 07:20:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.525 { 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme$subsystem", 00:14:48.525 "trtype": "$TEST_TRANSPORT", 00:14:48.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "$NVMF_PORT", 00:14:48.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.525 "hdgst": ${hdgst:-false}, 00:14:48.525 "ddgst": ${ddgst:-false} 00:14:48.525 }, 00:14:48.525 "method": "bdev_nvme_attach_controller" 00:14:48.525 } 00:14:48.525 EOF 00:14:48.525 )") 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # cat 00:14:48.525 07:20:50 -- nvmf/common.sh@542 -- # cat 00:14:48.525 07:20:50 -- nvmf/common.sh@544 -- # jq . 00:14:48.525 07:20:50 -- nvmf/common.sh@544 -- # jq . 
00:14:48.525 07:20:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.525 07:20:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.525 "params": { 00:14:48.525 "name": "Nvme1", 00:14:48.525 "trtype": "tcp", 00:14:48.525 "traddr": "10.0.0.2", 00:14:48.525 "adrfam": "ipv4", 00:14:48.525 "trsvcid": "4420", 00:14:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.526 "hdgst": false, 00:14:48.526 "ddgst": false 00:14:48.526 }, 00:14:48.526 "method": "bdev_nvme_attach_controller" 00:14:48.526 }' 00:14:48.526 07:20:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.526 07:20:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.526 "params": { 00:14:48.526 "name": "Nvme1", 00:14:48.526 "trtype": "tcp", 00:14:48.526 "traddr": "10.0.0.2", 00:14:48.526 "adrfam": "ipv4", 00:14:48.526 "trsvcid": "4420", 00:14:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.526 "hdgst": false, 00:14:48.526 "ddgst": false 00:14:48.526 }, 00:14:48.526 "method": "bdev_nvme_attach_controller" 00:14:48.526 }' 00:14:48.526 07:20:50 -- nvmf/common.sh@544 -- # jq . 00:14:48.526 07:20:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.526 07:20:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.526 "params": { 00:14:48.526 "name": "Nvme1", 00:14:48.526 "trtype": "tcp", 00:14:48.526 "traddr": "10.0.0.2", 00:14:48.526 "adrfam": "ipv4", 00:14:48.526 "trsvcid": "4420", 00:14:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.526 "hdgst": false, 00:14:48.526 "ddgst": false 00:14:48.526 }, 00:14:48.526 "method": "bdev_nvme_attach_controller" 00:14:48.526 }' 00:14:48.526 [2024-11-04 07:20:50.199957] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.526 [2024-11-04 07:20:50.200045] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:48.526 07:20:50 -- target/bdev_io_wait.sh@37 -- # wait 84729 00:14:48.526 [2024-11-04 07:20:50.221829] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.526 [2024-11-04 07:20:50.221933] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:48.526 [2024-11-04 07:20:50.222790] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.526 [2024-11-04 07:20:50.223074] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:48.526 [2024-11-04 07:20:50.229538] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:48.526 [2024-11-04 07:20:50.229609] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:48.785 [2024-11-04 07:20:50.440611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.785 [2024-11-04 07:20:50.517347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:48.785 [2024-11-04 07:20:50.540081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.785 [2024-11-04 07:20:50.614474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.044 [2024-11-04 07:20:50.637056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:49.044 [2024-11-04 07:20:50.692357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.044 [2024-11-04 07:20:50.705485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.044 Running I/O for 1 seconds... 00:14:49.044 [2024-11-04 07:20:50.799891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.044 Running I/O for 1 seconds... 00:14:49.044 Running I/O for 1 seconds... 00:14:49.303 Running I/O for 1 seconds... 00:14:50.238 00:14:50.238 Latency(us) 00:14:50.238 [2024-11-04T07:20:52.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.238 [2024-11-04T07:20:52.079Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:50.238 Nvme1n1 : 1.02 7108.93 27.77 0.00 0.00 17910.79 8221.79 28478.37 00:14:50.238 [2024-11-04T07:20:52.079Z] =================================================================================================================== 00:14:50.238 [2024-11-04T07:20:52.079Z] Total : 7108.93 27.77 0.00 0.00 17910.79 8221.79 28478.37 00:14:50.238 00:14:50.238 Latency(us) 00:14:50.238 [2024-11-04T07:20:52.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.238 [2024-11-04T07:20:52.079Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:50.238 Nvme1n1 : 1.00 6977.24 27.25 0.00 0.00 18292.96 4944.99 38368.35 00:14:50.239 [2024-11-04T07:20:52.080Z] =================================================================================================================== 00:14:50.239 [2024-11-04T07:20:52.080Z] Total : 6977.24 27.25 0.00 0.00 18292.96 4944.99 38368.35 00:14:50.239 00:14:50.239 Latency(us) 00:14:50.239 [2024-11-04T07:20:52.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.239 [2024-11-04T07:20:52.080Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:50.239 Nvme1n1 : 1.01 8461.63 33.05 0.00 0.00 15056.81 8579.26 25141.99 00:14:50.239 [2024-11-04T07:20:52.080Z] =================================================================================================================== 00:14:50.239 [2024-11-04T07:20:52.080Z] Total : 8461.63 33.05 0.00 0.00 15056.81 8579.26 25141.99 00:14:50.239 07:20:51 -- target/bdev_io_wait.sh@38 -- # wait 84732 00:14:50.239 00:14:50.239 Latency(us) 00:14:50.239 [2024-11-04T07:20:52.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.239 [2024-11-04T07:20:52.080Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:50.239 Nvme1n1 : 1.00 233923.93 913.77 0.00 0.00 545.58 229.93 748.45 00:14:50.239 [2024-11-04T07:20:52.080Z] 
=================================================================================================================== 00:14:50.239 [2024-11-04T07:20:52.080Z] Total : 233923.93 913.77 0.00 0.00 545.58 229.93 748.45 00:14:50.239 07:20:52 -- target/bdev_io_wait.sh@39 -- # wait 84734 00:14:50.497 07:20:52 -- target/bdev_io_wait.sh@40 -- # wait 84738 00:14:50.497 07:20:52 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.497 07:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.497 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:14:50.756 07:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.756 07:20:52 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:50.756 07:20:52 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:50.756 07:20:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.756 07:20:52 -- nvmf/common.sh@116 -- # sync 00:14:50.756 07:20:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:50.756 07:20:52 -- nvmf/common.sh@119 -- # set +e 00:14:50.756 07:20:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.756 07:20:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:50.756 rmmod nvme_tcp 00:14:50.756 rmmod nvme_fabrics 00:14:50.756 rmmod nvme_keyring 00:14:50.756 07:20:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.756 07:20:52 -- nvmf/common.sh@123 -- # set -e 00:14:50.756 07:20:52 -- nvmf/common.sh@124 -- # return 0 00:14:50.756 07:20:52 -- nvmf/common.sh@477 -- # '[' -n 84686 ']' 00:14:50.756 07:20:52 -- nvmf/common.sh@478 -- # killprocess 84686 00:14:50.756 07:20:52 -- common/autotest_common.sh@926 -- # '[' -z 84686 ']' 00:14:50.756 07:20:52 -- common/autotest_common.sh@930 -- # kill -0 84686 00:14:50.756 07:20:52 -- common/autotest_common.sh@931 -- # uname 00:14:50.756 07:20:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:50.756 07:20:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84686 00:14:50.756 07:20:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:50.756 07:20:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:50.756 07:20:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84686' 00:14:50.756 killing process with pid 84686 00:14:50.756 07:20:52 -- common/autotest_common.sh@945 -- # kill 84686 00:14:50.756 07:20:52 -- common/autotest_common.sh@950 -- # wait 84686 00:14:51.014 07:20:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.014 07:20:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.014 07:20:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.014 07:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.014 07:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.014 07:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.014 07:20:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:51.014 00:14:51.014 real 0m3.554s 00:14:51.014 user 0m16.110s 00:14:51.014 sys 0m1.965s 00:14:51.014 07:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.014 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 ************************************ 00:14:51.014 END TEST nvmf_bdev_io_wait 00:14:51.014 ************************************ 00:14:51.014 
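The nvmf_bdev_io_wait run that just finished drives the same Malloc0-backed subsystem from four concurrent bdevperf instances, one per workload (write, read, flush, unmap), each pinned to its own core mask and handed its target-connection JSON over /dev/fd/63. A condensed sketch of that fan-out, assuming the gen_nvmf_target_json helper from test/nvmf/common.sh is sourced as in the trace (the /dev/fd/63 in the commands above is presumably bash process substitution):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# One bdevperf per workload, each with its own core mask (-m) and instance id (-i),
# all attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 via the generated JSON.
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
# Wait for all four runs; in the trace these are PIDs 84729, 84732, 84734 and 84738.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"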
07:20:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.014 07:20:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:51.014 07:20:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.014 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 ************************************ 00:14:51.014 START TEST nvmf_queue_depth 00:14:51.014 ************************************ 00:14:51.014 07:20:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.014 * Looking for test storage... 00:14:51.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.014 07:20:52 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.014 07:20:52 -- nvmf/common.sh@7 -- # uname -s 00:14:51.014 07:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.014 07:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.014 07:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.014 07:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.014 07:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.014 07:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.014 07:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.014 07:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.014 07:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.014 07:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:51.014 07:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:14:51.014 07:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.014 07:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.014 07:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.014 07:20:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.014 07:20:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.014 07:20:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.014 07:20:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.014 07:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.014 07:20:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.014 07:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.014 07:20:52 -- paths/export.sh@5 -- # export PATH 00:14:51.014 07:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.014 07:20:52 -- nvmf/common.sh@46 -- # : 0 00:14:51.014 07:20:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.014 07:20:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.014 07:20:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.014 07:20:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.014 07:20:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.014 07:20:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.014 07:20:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.014 07:20:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.014 07:20:52 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:51.014 07:20:52 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:51.014 07:20:52 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.014 07:20:52 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:51.014 07:20:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.014 07:20:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.014 07:20:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.014 07:20:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.014 07:20:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.014 07:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.014 07:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.014 07:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.014 07:20:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@414 -- # [[ virt 
== phy-fallback ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:51.014 07:20:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:51.014 07:20:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.014 07:20:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.014 07:20:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:51.014 07:20:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:51.014 07:20:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.014 07:20:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.014 07:20:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.014 07:20:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.014 07:20:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.014 07:20:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.014 07:20:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.014 07:20:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.014 07:20:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:51.332 07:20:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:51.332 Cannot find device "nvmf_tgt_br" 00:14:51.332 07:20:52 -- nvmf/common.sh@154 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.332 Cannot find device "nvmf_tgt_br2" 00:14:51.332 07:20:52 -- nvmf/common.sh@155 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:51.332 07:20:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:51.332 Cannot find device "nvmf_tgt_br" 00:14:51.332 07:20:52 -- nvmf/common.sh@157 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:51.332 Cannot find device "nvmf_tgt_br2" 00:14:51.332 07:20:52 -- nvmf/common.sh@158 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:51.332 07:20:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:51.332 07:20:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.332 07:20:52 -- nvmf/common.sh@161 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.332 07:20:52 -- nvmf/common.sh@162 -- # true 00:14:51.332 07:20:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.332 07:20:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.332 07:20:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.332 07:20:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.332 07:20:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.332 07:20:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.332 07:20:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.332 07:20:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.332 07:20:53 -- nvmf/common.sh@179 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.332 07:20:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:51.332 07:20:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:51.332 07:20:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:51.332 07:20:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:51.332 07:20:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.332 07:20:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.332 07:20:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.332 07:20:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:51.332 07:20:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:51.332 07:20:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.332 07:20:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.332 07:20:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.613 07:20:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.613 07:20:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.613 07:20:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:51.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:51.613 00:14:51.613 --- 10.0.0.2 ping statistics --- 00:14:51.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.613 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:51.613 07:20:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:51.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:51.613 00:14:51.613 --- 10.0.0.3 ping statistics --- 00:14:51.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.613 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:51.613 07:20:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:14:51.613 00:14:51.613 --- 10.0.0.1 ping statistics --- 00:14:51.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.613 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:51.613 07:20:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.613 07:20:53 -- nvmf/common.sh@421 -- # return 0 00:14:51.613 07:20:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:51.613 07:20:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.613 07:20:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:51.613 07:20:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:51.613 07:20:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.613 07:20:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:51.613 07:20:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:51.613 07:20:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:51.613 07:20:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:51.613 07:20:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:51.613 07:20:53 -- common/autotest_common.sh@10 -- # set +x 00:14:51.613 07:20:53 -- nvmf/common.sh@469 -- # nvmfpid=84942 00:14:51.613 07:20:53 -- nvmf/common.sh@470 -- # waitforlisten 84942 00:14:51.613 07:20:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.613 07:20:53 -- common/autotest_common.sh@819 -- # '[' -z 84942 ']' 00:14:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.613 07:20:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.613 07:20:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:51.613 07:20:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.613 07:20:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:51.613 07:20:53 -- common/autotest_common.sh@10 -- # set +x 00:14:51.613 [2024-11-04 07:20:53.260564] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:51.613 [2024-11-04 07:20:53.260638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.613 [2024-11-04 07:20:53.395496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.872 [2024-11-04 07:20:53.468278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:51.872 [2024-11-04 07:20:53.468420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.872 [2024-11-04 07:20:53.468432] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.872 [2024-11-04 07:20:53.468441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.872 [2024-11-04 07:20:53.468468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.439 07:20:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:52.439 07:20:54 -- common/autotest_common.sh@852 -- # return 0 00:14:52.439 07:20:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:52.439 07:20:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 07:20:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.439 07:20:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.439 07:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 [2024-11-04 07:20:54.194430] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.439 07:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.439 07:20:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.439 07:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 Malloc0 00:14:52.439 07:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.439 07:20:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.439 07:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 07:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.439 07:20:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.439 07:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 07:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.439 07:20:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.439 07:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 [2024-11-04 07:20:54.269902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.439 07:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.439 07:20:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=84992 00:14:52.439 07:20:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:52.439 07:20:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.439 07:20:54 -- target/queue_depth.sh@33 -- # waitforlisten 84992 /var/tmp/bdevperf.sock 00:14:52.439 07:20:54 -- common/autotest_common.sh@819 -- # '[' -z 84992 ']' 00:14:52.439 07:20:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.439 07:20:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
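Stripped of the rpc_cmd wrapper and the timestamps, the queue_depth setup traced above (together with the bdev_nvme_attach_controller and perform_tests steps that follow below) amounts to: provision a TCP listener backed by a malloc bdev on the namespace-hosted target, start bdevperf in wait-for-RPC mode (-z) on its own socket, attach it to the subsystem, and kick off the 10-second verify run at queue depth 1024. A rough rpc.py equivalent, as a sketch only (the real script also waits for each socket with waitforlisten):

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Target side: the nvmf_tgt started above answers on the default /var/tmp/spdk.sock.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf idles (-z) on its own RPC socket until a bdev is attached.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
kill "$BDEVPERF_PID"; wait "$BDEVPERF_PID"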
00:14:52.439 07:20:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.439 07:20:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.439 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 [2024-11-04 07:20:54.313556] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:52.698 [2024-11-04 07:20:54.313637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84992 ] 00:14:52.698 [2024-11-04 07:20:54.449663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.698 [2024-11-04 07:20:54.517980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.635 07:20:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.635 07:20:55 -- common/autotest_common.sh@852 -- # return 0 00:14:53.635 07:20:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:53.635 07:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.635 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.635 NVMe0n1 00:14:53.635 07:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.635 07:20:55 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.635 Running I/O for 10 seconds... 00:15:05.846 00:15:05.846 Latency(us) 00:15:05.846 [2024-11-04T07:21:07.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.846 [2024-11-04T07:21:07.687Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:05.846 Verification LBA range: start 0x0 length 0x4000 00:15:05.846 NVMe0n1 : 10.05 16665.95 65.10 0.00 0.00 61243.25 12928.47 50045.67 00:15:05.846 [2024-11-04T07:21:07.687Z] =================================================================================================================== 00:15:05.846 [2024-11-04T07:21:07.687Z] Total : 16665.95 65.10 0.00 0.00 61243.25 12928.47 50045.67 00:15:05.846 0 00:15:05.846 07:21:05 -- target/queue_depth.sh@39 -- # killprocess 84992 00:15:05.846 07:21:05 -- common/autotest_common.sh@926 -- # '[' -z 84992 ']' 00:15:05.846 07:21:05 -- common/autotest_common.sh@930 -- # kill -0 84992 00:15:05.846 07:21:05 -- common/autotest_common.sh@931 -- # uname 00:15:05.846 07:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.846 07:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84992 00:15:05.846 07:21:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.846 07:21:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.846 killing process with pid 84992 00:15:05.846 07:21:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84992' 00:15:05.846 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.846 00:15:05.846 Latency(us) 00:15:05.846 [2024-11-04T07:21:07.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.846 [2024-11-04T07:21:07.687Z] =================================================================================================================== 00:15:05.846 
[2024-11-04T07:21:07.687Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.846 07:21:05 -- common/autotest_common.sh@945 -- # kill 84992 00:15:05.846 07:21:05 -- common/autotest_common.sh@950 -- # wait 84992 00:15:05.846 07:21:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:05.846 07:21:05 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:05.846 07:21:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.846 07:21:05 -- nvmf/common.sh@116 -- # sync 00:15:05.846 07:21:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.846 07:21:05 -- nvmf/common.sh@119 -- # set +e 00:15:05.846 07:21:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.846 07:21:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.846 rmmod nvme_tcp 00:15:05.846 rmmod nvme_fabrics 00:15:05.846 rmmod nvme_keyring 00:15:05.846 07:21:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.846 07:21:05 -- nvmf/common.sh@123 -- # set -e 00:15:05.846 07:21:05 -- nvmf/common.sh@124 -- # return 0 00:15:05.846 07:21:05 -- nvmf/common.sh@477 -- # '[' -n 84942 ']' 00:15:05.846 07:21:05 -- nvmf/common.sh@478 -- # killprocess 84942 00:15:05.846 07:21:05 -- common/autotest_common.sh@926 -- # '[' -z 84942 ']' 00:15:05.846 07:21:05 -- common/autotest_common.sh@930 -- # kill -0 84942 00:15:05.846 07:21:05 -- common/autotest_common.sh@931 -- # uname 00:15:05.846 07:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.846 07:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84942 00:15:05.846 07:21:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:05.846 07:21:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:05.846 killing process with pid 84942 00:15:05.846 07:21:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84942' 00:15:05.846 07:21:05 -- common/autotest_common.sh@945 -- # kill 84942 00:15:05.846 07:21:05 -- common/autotest_common.sh@950 -- # wait 84942 00:15:05.846 07:21:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.846 07:21:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.846 07:21:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.846 07:21:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.846 07:21:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.846 07:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.846 07:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.846 07:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.846 07:21:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:05.846 00:15:05.846 real 0m13.440s 00:15:05.846 user 0m22.086s 00:15:05.846 sys 0m2.616s 00:15:05.846 07:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.846 07:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.846 ************************************ 00:15:05.846 END TEST nvmf_queue_depth 00:15:05.846 ************************************ 00:15:05.846 07:21:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.846 07:21:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:05.846 07:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.846 07:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.846 ************************************ 00:15:05.846 START TEST nvmf_multipath 00:15:05.846 
************************************ 00:15:05.846 07:21:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.846 * Looking for test storage... 00:15:05.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.846 07:21:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.846 07:21:06 -- nvmf/common.sh@7 -- # uname -s 00:15:05.846 07:21:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.846 07:21:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.846 07:21:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.846 07:21:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.846 07:21:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.846 07:21:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.846 07:21:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.846 07:21:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.846 07:21:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.846 07:21:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.846 07:21:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:05.846 07:21:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:05.846 07:21:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.846 07:21:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.846 07:21:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.846 07:21:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.846 07:21:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.846 07:21:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.846 07:21:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.847 07:21:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.847 07:21:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.847 07:21:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.847 07:21:06 -- paths/export.sh@5 -- # export PATH 00:15:05.847 07:21:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.847 07:21:06 -- nvmf/common.sh@46 -- # : 0 00:15:05.847 07:21:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.847 07:21:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.847 07:21:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.847 07:21:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.847 07:21:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.847 07:21:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.847 07:21:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.847 07:21:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:05.847 07:21:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.847 07:21:06 -- target/multipath.sh@43 -- # nvmftestinit 00:15:05.847 07:21:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.847 07:21:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.847 07:21:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.847 07:21:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.847 07:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.847 07:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.847 07:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.847 07:21:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:05.847 07:21:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.847 07:21:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.847 07:21:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.847 07:21:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:05.847 07:21:06 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.847 07:21:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.847 07:21:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.847 07:21:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.847 07:21:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.847 07:21:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.847 07:21:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.847 07:21:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.847 07:21:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:05.847 07:21:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:05.847 Cannot find device "nvmf_tgt_br" 00:15:05.847 07:21:06 -- nvmf/common.sh@154 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.847 Cannot find device "nvmf_tgt_br2" 00:15:05.847 07:21:06 -- nvmf/common.sh@155 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:05.847 07:21:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:05.847 Cannot find device "nvmf_tgt_br" 00:15:05.847 07:21:06 -- nvmf/common.sh@157 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:05.847 Cannot find device "nvmf_tgt_br2" 00:15:05.847 07:21:06 -- nvmf/common.sh@158 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:05.847 07:21:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:05.847 07:21:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.847 07:21:06 -- nvmf/common.sh@161 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.847 07:21:06 -- nvmf/common.sh@162 -- # true 00:15:05.847 07:21:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.847 07:21:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.847 07:21:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.847 07:21:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.847 07:21:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.847 07:21:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.847 07:21:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.847 07:21:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.847 07:21:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.847 07:21:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:05.847 07:21:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:05.847 07:21:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:05.847 07:21:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:05.847 07:21:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:05.847 07:21:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.847 07:21:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.847 07:21:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:05.847 07:21:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:05.847 07:21:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.847 07:21:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.847 07:21:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.847 07:21:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.847 07:21:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.847 07:21:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:05.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:05.847 00:15:05.847 --- 10.0.0.2 ping statistics --- 00:15:05.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.847 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:05.847 07:21:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:05.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:05.847 00:15:05.847 --- 10.0.0.3 ping statistics --- 00:15:05.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.847 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:05.847 07:21:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:05.847 00:15:05.847 --- 10.0.0.1 ping statistics --- 00:15:05.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.847 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:05.847 07:21:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.847 07:21:06 -- nvmf/common.sh@421 -- # return 0 00:15:05.847 07:21:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.847 07:21:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.847 07:21:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.847 07:21:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.847 07:21:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.848 07:21:06 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:05.848 07:21:06 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:05.848 07:21:06 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:05.848 07:21:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.848 07:21:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:05.848 07:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.848 07:21:06 -- nvmf/common.sh@469 -- # nvmfpid=85318 00:15:05.848 07:21:06 -- nvmf/common.sh@470 -- # waitforlisten 85318 00:15:05.848 07:21:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.848 07:21:06 -- common/autotest_common.sh@819 -- # '[' -z 85318 ']' 00:15:05.848 07:21:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.848 07:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.848 07:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.848 07:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.848 07:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.848 [2024-11-04 07:21:06.755731] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:05.848 [2024-11-04 07:21:06.756367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.848 [2024-11-04 07:21:06.895574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.848 [2024-11-04 07:21:06.955969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.848 [2024-11-04 07:21:06.956122] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.848 [2024-11-04 07:21:06.956135] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.848 [2024-11-04 07:21:06.956143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:05.848 [2024-11-04 07:21:06.956284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.848 [2024-11-04 07:21:06.957446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.848 [2024-11-04 07:21:06.957611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.848 [2024-11-04 07:21:06.957616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.107 07:21:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.107 07:21:07 -- common/autotest_common.sh@852 -- # return 0 00:15:06.107 07:21:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:06.107 07:21:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:06.107 07:21:07 -- common/autotest_common.sh@10 -- # set +x 00:15:06.107 07:21:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.107 07:21:07 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:06.365 [2024-11-04 07:21:08.010429] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.365 07:21:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:06.624 Malloc0 00:15:06.624 07:21:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:06.884 07:21:08 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.143 07:21:08 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.143 [2024-11-04 07:21:08.971957] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.402 07:21:08 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:07.402 [2024-11-04 07:21:09.180165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.402 07:21:09 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:07.661 07:21:09 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:07.918 07:21:09 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.918 07:21:09 -- common/autotest_common.sh@1177 -- # local i=0 00:15:07.918 07:21:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.918 07:21:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:07.918 07:21:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:09.829 07:21:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:09.829 07:21:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:09.829 07:21:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.829 07:21:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:09.829 07:21:11 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.829 07:21:11 -- common/autotest_common.sh@1187 -- # return 0 00:15:09.829 07:21:11 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:09.829 07:21:11 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:09.829 07:21:11 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:09.829 07:21:11 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:09.829 07:21:11 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:09.829 07:21:11 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:09.829 07:21:11 -- target/multipath.sh@38 -- # return 0 00:15:09.829 07:21:11 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:09.829 07:21:11 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:09.829 07:21:11 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:09.829 07:21:11 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:09.829 07:21:11 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:09.829 07:21:11 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:09.829 07:21:11 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:09.830 07:21:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:09.830 07:21:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.830 07:21:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:09.830 07:21:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:09.830 07:21:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.830 07:21:11 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:09.830 07:21:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:09.830 07:21:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.830 07:21:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:09.830 07:21:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:09.830 07:21:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.830 07:21:11 -- target/multipath.sh@85 -- # echo numa 00:15:09.830 07:21:11 -- target/multipath.sh@88 -- # fio_pid=85461 00:15:09.830 07:21:11 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:09.830 07:21:11 -- target/multipath.sh@90 -- # sleep 1 00:15:10.089 [global] 00:15:10.089 thread=1 00:15:10.089 invalidate=1 00:15:10.089 rw=randrw 00:15:10.089 time_based=1 00:15:10.089 runtime=6 00:15:10.089 ioengine=libaio 00:15:10.089 direct=1 00:15:10.089 bs=4096 00:15:10.089 iodepth=128 00:15:10.089 norandommap=0 00:15:10.089 numjobs=1 00:15:10.089 00:15:10.089 verify_dump=1 00:15:10.089 verify_backlog=512 00:15:10.089 verify_state_save=0 00:15:10.089 do_verify=1 00:15:10.089 verify=crc32c-intel 00:15:10.089 [job0] 00:15:10.089 filename=/dev/nvme0n1 00:15:10.089 Could not set queue depth (nvme0n1) 00:15:10.089 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.089 fio-3.35 00:15:10.089 Starting 1 thread 00:15:11.025 07:21:12 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:11.284 07:21:12 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:11.543 07:21:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:11.543 07:21:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:11.543 07:21:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.543 07:21:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:11.543 07:21:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:11.543 07:21:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:11.543 07:21:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:11.543 07:21:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:11.543 07:21:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.543 07:21:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:11.543 07:21:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.543 07:21:13 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:11.543 07:21:13 -- target/multipath.sh@25 -- # sleep 1s 00:15:12.480 07:21:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:12.480 07:21:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.480 07:21:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.480 07:21:14 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:12.739 07:21:14 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:12.998 07:21:14 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:12.998 07:21:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:12.998 07:21:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.998 07:21:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.998 07:21:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.998 07:21:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.998 07:21:14 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:12.998 07:21:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:12.998 07:21:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.998 07:21:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.998 07:21:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.998 07:21:14 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.998 07:21:14 -- target/multipath.sh@25 -- # sleep 1s 00:15:14.045 07:21:15 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:14.045 07:21:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.045 07:21:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.045 07:21:15 -- target/multipath.sh@104 -- # wait 85461 00:15:16.579 00:15:16.579 job0: (groupid=0, jobs=1): err= 0: pid=85482: Mon Nov 4 07:21:18 2024 00:15:16.579 read: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(295MiB/6005msec) 00:15:16.579 slat (usec): min=3, max=7486, avg=44.99, stdev=201.31 00:15:16.579 clat (usec): min=836, max=15162, avg=6996.50, stdev=1144.24 00:15:16.579 lat (usec): min=871, max=15204, avg=7041.49, stdev=1151.90 00:15:16.579 clat percentiles (usec): 00:15:16.579 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6194], 00:15:16.579 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7177], 00:15:16.579 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8291], 95.00th=[ 8979], 00:15:16.579 | 99.00th=[10421], 99.50th=[10945], 99.90th=[12125], 99.95th=[12780], 00:15:16.579 | 99.99th=[13698] 00:15:16.579 bw ( KiB/s): min=12040, max=32016, per=52.37%, avg=26350.55, stdev=6680.97, samples=11 00:15:16.579 iops : min= 3010, max= 8004, avg=6587.64, stdev=1670.24, samples=11 00:15:16.579 write: IOPS=7205, BW=28.1MiB/s (29.5MB/s)(149MiB/5309msec); 0 zone resets 00:15:16.579 slat (usec): min=12, max=2508, avg=57.08, stdev=133.42 00:15:16.579 clat (usec): min=388, max=12450, avg=6087.57, stdev=983.31 00:15:16.579 lat (usec): min=479, max=12480, avg=6144.66, stdev=985.65 00:15:16.579 clat percentiles (usec): 00:15:16.579 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 5473], 00:15:16.579 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6325], 00:15:16.579 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7373], 00:15:16.579 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11207], 99.95th=[11600], 00:15:16.579 | 99.99th=[12387] 00:15:16.579 bw ( KiB/s): min=12576, max=31536, per=91.13%, avg=26267.64, stdev=6308.38, samples=11 00:15:16.579 iops : min= 3144, max= 7884, avg=6566.91, stdev=1577.09, samples=11 00:15:16.579 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:16.579 lat (msec) : 2=0.03%, 4=1.51%, 10=97.12%, 20=1.33% 00:15:16.579 cpu : usr=6.24%, sys=25.25%, ctx=7076, majf=0, minf=78 00:15:16.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:16.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.579 issued rwts: total=75542,38256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.579 00:15:16.579 Run status group 0 (all jobs): 00:15:16.579 READ: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=295MiB (309MB), run=6005-6005msec 00:15:16.579 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=149MiB (157MB), run=5309-5309msec 00:15:16.579 00:15:16.579 Disk stats (read/write): 00:15:16.579 nvme0n1: ios=74856/37237, merge=0/0, ticks=486164/209663, in_queue=695827, util=98.55% 00:15:16.579 07:21:18 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:16.579 07:21:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:16.838 07:21:18 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:16.838 07:21:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:16.838 07:21:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:16.838 07:21:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:16.838 07:21:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:16.838 07:21:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.838 07:21:18 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:16.838 07:21:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:16.838 07:21:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:16.838 07:21:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:16.838 07:21:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.838 07:21:18 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:16.838 07:21:18 -- target/multipath.sh@25 -- # sleep 1s 00:15:18.215 07:21:19 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:18.215 07:21:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.215 07:21:19 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.215 07:21:19 -- target/multipath.sh@113 -- # echo round-robin 00:15:18.215 07:21:19 -- target/multipath.sh@116 -- # fio_pid=85612 00:15:18.215 07:21:19 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:18.215 07:21:19 -- target/multipath.sh@118 -- # sleep 1 00:15:18.215 [global] 00:15:18.215 thread=1 00:15:18.215 invalidate=1 00:15:18.215 rw=randrw 00:15:18.215 time_based=1 00:15:18.215 runtime=6 00:15:18.215 ioengine=libaio 00:15:18.215 direct=1 00:15:18.215 bs=4096 00:15:18.215 iodepth=128 00:15:18.215 norandommap=0 00:15:18.215 numjobs=1 00:15:18.215 00:15:18.215 verify_dump=1 00:15:18.215 verify_backlog=512 00:15:18.215 verify_state_save=0 00:15:18.215 do_verify=1 00:15:18.215 verify=crc32c-intel 00:15:18.215 [job0] 00:15:18.215 filename=/dev/nvme0n1 00:15:18.215 Could not set queue depth (nvme0n1) 00:15:18.215 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.215 fio-3.35 00:15:18.215 Starting 1 thread 00:15:19.150 07:21:20 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:19.150 07:21:20 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:19.409 07:21:21 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:19.409 07:21:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:19.409 07:21:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.409 07:21:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.409 07:21:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.409 07:21:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.409 07:21:21 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:19.409 07:21:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:19.409 07:21:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.409 07:21:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.409 07:21:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.409 07:21:21 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.409 07:21:21 -- target/multipath.sh@25 -- # sleep 1s 00:15:20.786 07:21:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:20.786 07:21:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.786 07:21:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.786 07:21:22 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:20.786 07:21:22 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:21.045 07:21:22 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:21.045 07:21:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:21.045 07:21:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.045 07:21:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:21.045 07:21:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:21.045 07:21:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:21.045 07:21:22 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:21.045 07:21:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:21.045 07:21:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.045 07:21:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:21.045 07:21:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.045 07:21:22 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.045 07:21:22 -- target/multipath.sh@25 -- # sleep 1s 00:15:21.980 07:21:23 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:21.980 07:21:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.980 07:21:23 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.980 07:21:23 -- target/multipath.sh@132 -- # wait 85612 00:15:24.515 00:15:24.515 job0: (groupid=0, jobs=1): err= 0: pid=85639: Mon Nov 4 07:21:25 2024 00:15:24.515 read: IOPS=13.2k, BW=51.8MiB/s (54.3MB/s)(311MiB/6005msec) 00:15:24.515 slat (usec): min=2, max=7737, avg=38.06, stdev=179.78 00:15:24.515 clat (usec): min=321, max=43024, avg=6674.95, stdev=1651.70 00:15:24.515 lat (usec): min=331, max=43035, avg=6713.00, stdev=1659.81 00:15:24.515 clat percentiles (usec): 00:15:24.515 | 1.00th=[ 2540], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5669], 00:15:24.515 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:24.515 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 8455], 95.00th=[ 9503], 00:15:24.515 | 99.00th=[11469], 99.50th=[12387], 99.90th=[14353], 99.95th=[15270], 00:15:24.515 | 99.99th=[16909] 00:15:24.515 bw ( KiB/s): min=10280, max=35560, per=51.03%, avg=27042.18, stdev=7486.98, samples=11 00:15:24.515 iops : min= 2570, max= 8890, avg=6760.55, stdev=1871.75, samples=11 00:15:24.515 write: IOPS=7702, BW=30.1MiB/s (31.5MB/s)(161MiB/5351msec); 0 zone resets 00:15:24.515 slat (usec): min=11, max=2999, avg=49.26, stdev=110.83 00:15:24.515 clat (usec): min=405, max=14392, avg=5602.11, stdev=1469.55 00:15:24.515 lat (usec): min=488, max=14419, avg=5651.37, stdev=1474.86 00:15:24.515 clat percentiles (usec): 00:15:24.515 | 1.00th=[ 2114], 5.00th=[ 2933], 10.00th=[ 3458], 20.00th=[ 4424], 00:15:24.515 | 30.00th=[ 5145], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 6063], 00:15:24.515 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 6980], 95.00th=[ 7701], 00:15:24.515 | 99.00th=[ 9765], 99.50th=[10290], 99.90th=[11863], 99.95th=[12911], 00:15:24.515 | 99.99th=[14353] 00:15:24.515 bw ( KiB/s): min=10704, max=36336, per=87.67%, avg=27010.91, stdev=7312.74, samples=11 00:15:24.515 iops : min= 2676, max= 9084, avg=6752.73, stdev=1828.19, samples=11 00:15:24.515 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:15:24.515 lat (msec) : 2=0.48%, 4=8.75%, 10=88.18%, 20=2.52%, 50=0.01% 00:15:24.515 cpu : usr=6.78%, sys=26.38%, ctx=8039, majf=0, minf=102 00:15:24.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:24.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.515 issued rwts: total=79557,41214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.515 00:15:24.515 Run status group 0 (all jobs): 00:15:24.515 READ: bw=51.8MiB/s (54.3MB/s), 51.8MiB/s-51.8MiB/s (54.3MB/s-54.3MB/s), io=311MiB (326MB), run=6005-6005msec 00:15:24.515 WRITE: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=161MiB (169MB), run=5351-5351msec 00:15:24.515 00:15:24.515 Disk stats (read/write): 00:15:24.515 nvme0n1: ios=78615/40472, merge=0/0, ticks=485808/207954, in_queue=693762, util=98.55% 00:15:24.515 07:21:25 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:24.515 07:21:26 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.515 07:21:26 -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.515 07:21:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:24.515 07:21:26 
-- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.515 07:21:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.515 07:21:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:24.515 07:21:26 -- common/autotest_common.sh@1210 -- # return 0 00:15:24.515 07:21:26 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.774 07:21:26 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:24.774 07:21:26 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:24.774 07:21:26 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:24.774 07:21:26 -- target/multipath.sh@144 -- # nvmftestfini 00:15:24.774 07:21:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.774 07:21:26 -- nvmf/common.sh@116 -- # sync 00:15:24.774 07:21:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:24.774 07:21:26 -- nvmf/common.sh@119 -- # set +e 00:15:24.774 07:21:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.774 07:21:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:24.774 rmmod nvme_tcp 00:15:24.774 rmmod nvme_fabrics 00:15:24.774 rmmod nvme_keyring 00:15:25.032 07:21:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.032 07:21:26 -- nvmf/common.sh@123 -- # set -e 00:15:25.032 07:21:26 -- nvmf/common.sh@124 -- # return 0 00:15:25.032 07:21:26 -- nvmf/common.sh@477 -- # '[' -n 85318 ']' 00:15:25.032 07:21:26 -- nvmf/common.sh@478 -- # killprocess 85318 00:15:25.032 07:21:26 -- common/autotest_common.sh@926 -- # '[' -z 85318 ']' 00:15:25.032 07:21:26 -- common/autotest_common.sh@930 -- # kill -0 85318 00:15:25.032 07:21:26 -- common/autotest_common.sh@931 -- # uname 00:15:25.032 07:21:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.032 07:21:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85318 00:15:25.032 07:21:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.032 killing process with pid 85318 00:15:25.032 07:21:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.032 07:21:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85318' 00:15:25.032 07:21:26 -- common/autotest_common.sh@945 -- # kill 85318 00:15:25.032 07:21:26 -- common/autotest_common.sh@950 -- # wait 85318 00:15:25.291 07:21:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.291 07:21:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.291 07:21:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.291 07:21:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.291 07:21:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.291 07:21:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.291 07:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.291 07:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.291 07:21:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.291 00:15:25.291 real 0m20.778s 00:15:25.291 user 1m21.440s 00:15:25.291 sys 0m6.747s 00:15:25.291 07:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.291 ************************************ 00:15:25.291 END TEST nvmf_multipath 00:15:25.291 ************************************ 00:15:25.291 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.291 07:21:27 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.291 07:21:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.291 07:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.291 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.291 ************************************ 00:15:25.291 START TEST nvmf_zcopy 00:15:25.291 ************************************ 00:15:25.291 07:21:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.291 * Looking for test storage... 00:15:25.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.291 07:21:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.291 07:21:27 -- nvmf/common.sh@7 -- # uname -s 00:15:25.291 07:21:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.291 07:21:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.291 07:21:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.291 07:21:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.291 07:21:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.291 07:21:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.291 07:21:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.291 07:21:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.291 07:21:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.291 07:21:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.550 07:21:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:25.550 07:21:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:25.550 07:21:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.550 07:21:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.550 07:21:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.550 07:21:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.550 07:21:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.550 07:21:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.550 07:21:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.550 07:21:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.550 07:21:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.550 
07:21:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.550 07:21:27 -- paths/export.sh@5 -- # export PATH 00:15:25.550 07:21:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.550 07:21:27 -- nvmf/common.sh@46 -- # : 0 00:15:25.550 07:21:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.550 07:21:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.550 07:21:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.550 07:21:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.550 07:21:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.550 07:21:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:25.550 07:21:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.550 07:21:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.550 07:21:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:25.550 07:21:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.550 07:21:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.550 07:21:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.550 07:21:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.550 07:21:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.550 07:21:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.550 07:21:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.550 07:21:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.550 07:21:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.550 07:21:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.550 07:21:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.550 07:21:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.550 07:21:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.550 07:21:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.550 07:21:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.550 07:21:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.550 07:21:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.550 07:21:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.550 07:21:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.550 07:21:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.550 07:21:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.550 07:21:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.550 07:21:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.550 07:21:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.550 07:21:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.550 07:21:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.550 07:21:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.550 07:21:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.550 Cannot find device "nvmf_tgt_br" 00:15:25.551 07:21:27 -- nvmf/common.sh@154 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.551 Cannot find device "nvmf_tgt_br2" 00:15:25.551 07:21:27 -- nvmf/common.sh@155 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.551 07:21:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.551 Cannot find device "nvmf_tgt_br" 00:15:25.551 07:21:27 -- nvmf/common.sh@157 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.551 Cannot find device "nvmf_tgt_br2" 00:15:25.551 07:21:27 -- nvmf/common.sh@158 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.551 07:21:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.551 07:21:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.551 07:21:27 -- nvmf/common.sh@161 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.551 07:21:27 -- nvmf/common.sh@162 -- # true 00:15:25.551 07:21:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.551 07:21:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.551 07:21:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.551 07:21:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.551 07:21:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.551 07:21:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.810 07:21:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.810 07:21:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.810 07:21:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.810 07:21:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.810 07:21:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.810 07:21:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.810 07:21:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.810 07:21:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.810 07:21:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.810 07:21:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.810 07:21:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.810 
07:21:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.810 07:21:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.810 07:21:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.810 07:21:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.810 07:21:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.810 07:21:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.810 07:21:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:25.810 00:15:25.810 --- 10.0.0.2 ping statistics --- 00:15:25.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.810 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:25.810 07:21:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:25.810 00:15:25.810 --- 10.0.0.3 ping statistics --- 00:15:25.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.810 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:25.810 07:21:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:25.810 00:15:25.810 --- 10.0.0.1 ping statistics --- 00:15:25.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.810 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:25.810 07:21:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.810 07:21:27 -- nvmf/common.sh@421 -- # return 0 00:15:25.810 07:21:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.810 07:21:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.810 07:21:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.810 07:21:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.810 07:21:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.810 07:21:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.810 07:21:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.810 07:21:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:25.810 07:21:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.810 07:21:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.810 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.810 07:21:27 -- nvmf/common.sh@469 -- # nvmfpid=85914 00:15:25.810 07:21:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:25.810 07:21:27 -- nvmf/common.sh@470 -- # waitforlisten 85914 00:15:25.810 07:21:27 -- common/autotest_common.sh@819 -- # '[' -z 85914 ']' 00:15:25.810 07:21:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.810 07:21:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.810 07:21:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
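For reference, the nvmf_veth_init sequence traced above reduces to the standalone sketch below. Interface names, addresses and firewall rules are copied from the trace; the authoritative logic lives in test/nvmf/common.sh, so treat this as an illustration only.

  # One network namespace holds the target; two veth pairs give it the two
  # listener addresses (10.0.0.2 and 10.0.0.3), a third pair carries the
  # initiator (10.0.0.1), and a bridge ties the host-side ends together.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings recorded above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) then confirm that both target paths and the initiator path are reachable before nvmf_tgt is started.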
00:15:25.810 07:21:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.810 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.810 [2024-11-04 07:21:27.617209] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:25.810 [2024-11-04 07:21:27.617293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.068 [2024-11-04 07:21:27.756448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.068 [2024-11-04 07:21:27.829818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.068 [2024-11-04 07:21:27.829984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.068 [2024-11-04 07:21:27.829996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.069 [2024-11-04 07:21:27.830005] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.069 [2024-11-04 07:21:27.830039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.002 07:21:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:27.002 07:21:28 -- common/autotest_common.sh@852 -- # return 0 00:15:27.002 07:21:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.002 07:21:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 07:21:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.002 07:21:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:27.002 07:21:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 [2024-11-04 07:21:28.663582] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 [2024-11-04 07:21:28.679746] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
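The zcopy target bring-up traced here can be reproduced outside the harness with plain rpc.py calls; rpc_cmd is the harness wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket shown above. The sequence below mirrors the traced commands with flags copied from the log; it is an illustrative sketch, not the test script itself.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  # Create the TCP transport; --zcopy enables the zero-copy path under test,
  # remaining flags as traced.
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem cnode1: allow any host (-a), fixed serial, up to 10 namespaces (-m 10).
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB RAM-backed bdev with 4096-byte blocks, attached as namespace 1 just below.
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1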
00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 malloc0 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:27.002 07:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.002 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:27.002 07:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.002 07:21:28 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:27.002 07:21:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:27.002 07:21:28 -- nvmf/common.sh@520 -- # config=() 00:15:27.002 07:21:28 -- nvmf/common.sh@520 -- # local subsystem config 00:15:27.002 07:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:27.002 07:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:27.002 { 00:15:27.002 "params": { 00:15:27.002 "name": "Nvme$subsystem", 00:15:27.002 "trtype": "$TEST_TRANSPORT", 00:15:27.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.002 "adrfam": "ipv4", 00:15:27.002 "trsvcid": "$NVMF_PORT", 00:15:27.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.002 "hdgst": ${hdgst:-false}, 00:15:27.002 "ddgst": ${ddgst:-false} 00:15:27.002 }, 00:15:27.002 "method": "bdev_nvme_attach_controller" 00:15:27.002 } 00:15:27.002 EOF 00:15:27.002 )") 00:15:27.002 07:21:28 -- nvmf/common.sh@542 -- # cat 00:15:27.002 07:21:28 -- nvmf/common.sh@544 -- # jq . 00:15:27.002 07:21:28 -- nvmf/common.sh@545 -- # IFS=, 00:15:27.002 07:21:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:27.002 "params": { 00:15:27.002 "name": "Nvme1", 00:15:27.002 "trtype": "tcp", 00:15:27.002 "traddr": "10.0.0.2", 00:15:27.002 "adrfam": "ipv4", 00:15:27.002 "trsvcid": "4420", 00:15:27.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.002 "hdgst": false, 00:15:27.002 "ddgst": false 00:15:27.002 }, 00:15:27.002 "method": "bdev_nvme_attach_controller" 00:15:27.002 }' 00:15:27.002 [2024-11-04 07:21:28.763730] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:27.002 [2024-11-04 07:21:28.763828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85965 ] 00:15:27.260 [2024-11-04 07:21:28.901111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.260 [2024-11-04 07:21:28.970470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.518 Running I/O for 10 seconds... 
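The gen_nvmf_target_json / --json /dev/fd/62 lines above show how this bdevperf run finds its target: the helper expands the heredoc into a bdev_nvme_attach_controller entry (the resolved values are the JSON printed just above), and bdevperf reads that config from the process-substitution fd before starting the 10-second verify workload against cnode1. An equivalent standalone invocation as a sketch, assuming the usual SPDK "subsystems"/"config" wrapper that gen_nvmf_target_json adds but this excerpt does not print, and a hypothetical file path in place of /dev/fd/62:

    # Hypothetical config file reproducing the JSON printed above (wrapper structure assumed).
    cat > /tmp/bdevperf_nvme.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192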
00:15:37.490 00:15:37.490 Latency(us) 00:15:37.490 [2024-11-04T07:21:39.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.490 [2024-11-04T07:21:39.331Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:37.490 Verification LBA range: start 0x0 length 0x1000 00:15:37.490 Nvme1n1 : 10.01 11106.79 86.77 0.00 0.00 11495.77 875.05 17754.30 00:15:37.490 [2024-11-04T07:21:39.331Z] =================================================================================================================== 00:15:37.490 [2024-11-04T07:21:39.331Z] Total : 11106.79 86.77 0.00 0.00 11495.77 875.05 17754.30 00:15:37.750 07:21:39 -- target/zcopy.sh@39 -- # perfpid=86083 00:15:37.750 07:21:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:37.750 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:37.750 07:21:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:37.750 07:21:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:37.750 07:21:39 -- nvmf/common.sh@520 -- # config=() 00:15:37.750 07:21:39 -- nvmf/common.sh@520 -- # local subsystem config 00:15:37.750 07:21:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:37.750 07:21:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:37.750 { 00:15:37.750 "params": { 00:15:37.750 "name": "Nvme$subsystem", 00:15:37.750 "trtype": "$TEST_TRANSPORT", 00:15:37.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.750 "adrfam": "ipv4", 00:15:37.750 "trsvcid": "$NVMF_PORT", 00:15:37.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.750 "hdgst": ${hdgst:-false}, 00:15:37.750 "ddgst": ${ddgst:-false} 00:15:37.750 }, 00:15:37.750 "method": "bdev_nvme_attach_controller" 00:15:37.750 } 00:15:37.750 EOF 00:15:37.750 )") 00:15:37.750 07:21:39 -- nvmf/common.sh@542 -- # cat 00:15:37.750 07:21:39 -- nvmf/common.sh@544 -- # jq . 
00:15:37.750 [2024-11-04 07:21:39.349906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.349966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 07:21:39 -- nvmf/common.sh@545 -- # IFS=, 00:15:37.750 07:21:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:37.750 "params": { 00:15:37.750 "name": "Nvme1", 00:15:37.750 "trtype": "tcp", 00:15:37.750 "traddr": "10.0.0.2", 00:15:37.750 "adrfam": "ipv4", 00:15:37.750 "trsvcid": "4420", 00:15:37.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.750 "hdgst": false, 00:15:37.750 "ddgst": false 00:15:37.750 }, 00:15:37.750 "method": "bdev_nvme_attach_controller" 00:15:37.750 }' 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.357848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.357887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.365847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.365869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.373850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.373882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.381851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.381882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.389853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.389891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.395953] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:37.750 [2024-11-04 07:21:39.396043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86083 ] 00:15:37.750 [2024-11-04 07:21:39.397855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.397886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.405857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.405888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.413859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.413890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.421862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.421893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.429860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.429890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.437862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.437892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 
[2024-11-04 07:21:39.445864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.445896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.453865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.453897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.461867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.461898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.469868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.469900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.477869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.477900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.750 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.750 [2024-11-04 07:21:39.485871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.750 [2024-11-04 07:21:39.485903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.493882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.493901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:37.751 [2024-11-04 07:21:39.501891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.501913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.509884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.509907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.517903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.517926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.525888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.525907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 [2024-11-04 07:21:39.527497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.533891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.533913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.541891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.541914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.549892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.549914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.557894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.557916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.565896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.565918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.573898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.573920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.751 [2024-11-04 07:21:39.581898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.751 [2024-11-04 07:21:39.581918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.751 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.589900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.589928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.026 [2024-11-04 07:21:39.590113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.601910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.601932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.609906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.609929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.617910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.617943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.625911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.625934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.633915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.633937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.026 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.026 [2024-11-04 07:21:39.641918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.026 [2024-11-04 07:21:39.641941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.649917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.649939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.657917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.657939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.665920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.665942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.673926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.673949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.681931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.681952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.689958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.689985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.697956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.697983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.705941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.705966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.713954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.713981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.721944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.721970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.729956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.729981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.737957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.737982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.745962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.745988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.753963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.753990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 Running I/O for 5 seconds... 
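The repeated "Requested NSID 1 already in use" / Code=-32602 entries around this point are not a failure in themselves: malloc0 was already attached as namespace 1 at zcopy.sh@30, and while the second bdevperf instance (randrw, 5 seconds) is starting and running, the test keeps re-issuing nvmf_subsystem_add_ns against the same subsystem, so each call is rejected after going through the nvmf_rpc_ns_paused path seen in the log, presumably to exercise namespace management while zero-copy I/O is outstanding. A sketch of a loop shape that would produce this pattern; the actual zcopy.sh loop body is not shown in this excerpt, so treat it as an assumption:

    # Assumed shape of the RPC loop behind the repeated -32602 errors above.
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is already attached, so each attempt fails with "Invalid parameters" as logged.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done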
00:15:38.027 [2024-11-04 07:21:39.761959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.761983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.773072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.773113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.780473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.780503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.791794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.791825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.800165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.800205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.811083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.811112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.822465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.822497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.829948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.829977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.840894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.840934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.027 [2024-11-04 07:21:39.855303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.027 [2024-11-04 07:21:39.855332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.027 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.288 [2024-11-04 07:21:39.866296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.288 [2024-11-04 07:21:39.866326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.288 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.288 [2024-11-04 07:21:39.881720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.288 [2024-11-04 07:21:39.881750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.288 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.898079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.898119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.914488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.914518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.931029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.931059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.947403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.947446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.957785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.957814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.973916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.973956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:39.990167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:39.990207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.007326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.007368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.022704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.022767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.033392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.033432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.042504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.042537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.051847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.051888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.060567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.060597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.069869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.069907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.078525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.078557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.087349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.087378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.096313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.096343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.105467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.105496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.114251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.114281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.289 [2024-11-04 07:21:40.123485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.289 [2024-11-04 07:21:40.123515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.289 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.549 [2024-11-04 07:21:40.135458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.549 [2024-11-04 07:21:40.135488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.549 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.549 [2024-11-04 07:21:40.143402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.549 [2024-11-04 07:21:40.143431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.549 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.549 [2024-11-04 07:21:40.154024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.549 [2024-11-04 07:21:40.154054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.549 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:38.549 [2024-11-04 07:21:40.162540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:38.549 [2024-11-04 07:21:40.162570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:38.549 2024/11/04 07:21:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three lines (spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use, nvmf_rpc_ns_paused: Unable to add namespace, JSON-RPC error Code=-32602 Msg=Invalid parameters) repeat for every further nvmf_subsystem_add_ns attempt between 07:21:40.173 and 07:21:41.157, elapsed 00:15:38.549 through 00:15:39.332 ...]
00:15:39.332 [2024-11-04 07:21:41.166430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:39.332 [2024-11-04 07:21:41.166459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:39.332 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
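The burst of identical errors above comes from repeated nvmf_subsystem_add_ns calls for an NSID that is already attached to cnode1; the target rejects each one with JSON-RPC error Code=-32602. For illustration only, a minimal Python sketch of the request being rejected is shown below. It is not part of the test: the /var/tmp/spdk.sock path is the customary SPDK RPC socket and is an assumption here, and the parameters are simply those echoed in the log (nqn nqn.2016-06.io.spdk:cnode1, bdev malloc0, nsid 1).

    import json
    import socket

    # Request body reconstructed from the params echoed in the log above.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    # /var/tmp/spdk.sock is SPDK's usual RPC socket; assumed, not taken from this log.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")
        sock.sendall(json.dumps(request).encode())
        # A single recv is enough for this small reply in a sketch like this.
        reply = json.loads(sock.recv(65536).decode())

    # If NSID 1 is already in use on the subsystem, the reply carries the error
    # seen in the log: code -32602, "Invalid parameters".
    print(reply.get("error") or reply.get("result"))

In the SPDK tree this RPC is normally driven through scripts/rpc.py nvmf_subsystem_add_ns rather than a raw socket; the sketch only mirrors the wire-level request the log reports.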
00:15:39.591 [2024-11-04 07:21:41.177092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.591 [2024-11-04 07:21:41.177132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.591 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.591 [2024-11-04 07:21:41.185906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.591 [2024-11-04 07:21:41.185935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.591 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.591 [2024-11-04 07:21:41.199050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.591 [2024-11-04 07:21:41.199079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.591 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.591 [2024-11-04 07:21:41.207014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.591 [2024-11-04 07:21:41.207043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.591 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.591 [2024-11-04 07:21:41.217891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.591 [2024-11-04 07:21:41.217933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.226297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.226326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.237931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.237960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.246372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.246424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.256579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.256609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.265222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.265251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.276845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.276884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.285228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.285258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.294141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.294170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.303078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.303108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.311766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.311796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.321007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.321049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.329575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.329604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.338260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.338289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.347073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.347101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.356021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.356050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.592 [2024-11-04 07:21:41.365105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.592 [2024-11-04 07:21:41.365134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.592 2024/11/04 07:21:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method,
err: Code=-32602 Msg=Invalid parameters 00:15:40.895 [2024-11-04 07:21:42.692219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.895 [2024-11-04 07:21:42.692249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.895 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.895 [2024-11-04 07:21:42.702141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.895 [2024-11-04 07:21:42.702171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.895 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.895 [2024-11-04 07:21:42.710076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.895 [2024-11-04 07:21:42.710106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.895 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.895 [2024-11-04 07:21:42.720578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.895 [2024-11-04 07:21:42.720608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.895 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.895 [2024-11-04 07:21:42.728723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.895 [2024-11-04 07:21:42.728753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.895 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.154 [2024-11-04 07:21:42.737659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.154 [2024-11-04 07:21:42.737699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.154 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.746938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.746966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.755831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.755861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.764489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.764519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.773169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.773199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.786670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.786701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.796004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.796034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.803403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.803433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.814552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.814582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.822801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.822831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.833098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.833130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.842757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.842787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.849780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.849809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.861186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.861216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.869299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.869329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.879363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.879392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.886905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.886932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.902976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.903005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.913439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.913469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.929952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.929981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.940426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.940456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.956391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.956421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.966967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.966996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.974902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.974930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.155 [2024-11-04 07:21:42.985672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.155 [2024-11-04 07:21:42.985702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.155 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:42.994115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:42.994145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.002956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.002985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.011965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.011994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.020566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.020595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.029413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.029443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.037860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.037899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.046698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.046729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.055425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.055455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.064601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.064630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.073686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.073716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.082422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.082452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.091054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.091083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.099916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.099945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.108739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.108769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.117598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.117627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.126062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.126090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.134952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.134980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.143765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.143795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.152452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.152482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 
07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.161025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.161066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.169976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.170004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.178519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.178548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.187297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.187328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.195929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.195957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.204684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.204713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.415 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.415 [2024-11-04 07:21:43.213482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.415 [2024-11-04 07:21:43.213511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:41.416 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.416 [2024-11-04 07:21:43.221918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.416 [2024-11-04 07:21:43.221947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.416 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.416 [2024-11-04 07:21:43.231100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.416 [2024-11-04 07:21:43.231129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.416 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.416 [2024-11-04 07:21:43.239419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.416 [2024-11-04 07:21:43.239448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.416 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.416 [2024-11-04 07:21:43.248484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.416 [2024-11-04 07:21:43.248514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.416 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.257701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.257744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.266613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.266643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.275661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.275691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.284322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.284352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.293042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.293071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.301788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.301818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.315694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.315725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.675 [2024-11-04 07:21:43.324245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.675 [2024-11-04 07:21:43.324283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.675 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.333175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.333206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.341688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.341716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.350512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.350542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.359451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.359482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.368151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.368181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.377041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.377070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.386002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.386031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.394411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.394440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.403397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.403428] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.412441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.412471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.421085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.421114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.429726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.429755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.438682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.438720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.447472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.447502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.456152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.456189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.464985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 
07:21:43.465014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.473806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.473835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.483891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.483920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.494015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.494044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.676 [2024-11-04 07:21:43.509900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.676 [2024-11-04 07:21:43.509930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.676 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.526193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.526236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.542524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.542555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.559120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:41.936 [2024-11-04 07:21:43.559151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.575020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.575050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.586795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.586825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.602635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.602666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.618159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.618189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.634833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.634863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.650795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.650826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.661386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:41.936 [2024-11-04 07:21:43.661416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.677265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.677297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.687294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.687324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.703864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.703907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.714530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.714559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.722780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.722810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.732570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.732600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.740338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.740367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.750987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.751015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.758921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.758950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.936 [2024-11-04 07:21:43.769513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.936 [2024-11-04 07:21:43.769544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.936 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.777672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.777703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.788320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.788350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.797653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.797683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.805258] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.805287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.816383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.816413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.824271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.824300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.839965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.839994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.851005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.851034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.866422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.866452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.876882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.876913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 
07:21:43.892260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.892290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.902800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.902830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.196 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.196 [2024-11-04 07:21:43.909988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.196 [2024-11-04 07:21:43.910017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.921181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.921211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.929003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.929032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.939675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.939705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.947951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.947978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
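For context on the failures recorded above and below: each entry corresponds to one nvmf_subsystem_add_ns JSON-RPC request asking the SPDK target to attach bdev malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 while NSID 1 is already attached, so every attempt is rejected with JSON-RPC code -32602 ("Invalid parameters"). The sketch below is a minimal illustration of such a request; the method name and parameter shape are taken verbatim from the log lines, while the socket path /var/tmp/spdk.sock and the use of a raw Unix-socket client (rather than SPDK's rpc.py helper used by the test script) are assumptions for illustration only.

    import json
    import socket

    # Assumption: the SPDK target is listening on its default RPC socket path.
    SOCK_PATH = "/var/tmp/spdk.sock"

    # Request shape as seen in the log: nvmf_subsystem_add_ns with the subsystem
    # NQN and a namespace object naming the bdev and the (already used) NSID.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        # Read a single chunk for brevity; when NSID 1 is already in use the
        # response carries error code -32602, matching the "Msg=Invalid
        # parameters" entries in this log.
        print(json.loads(sock.recv(65536).decode()))

The repeated entries that follow are this same rejected request issued in a loop by the zcopy test while I/O runs against the subsystem.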
00:15:42.197 [2024-11-04 07:21:43.957336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.957366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.966029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.966059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.974734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.974764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.983758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.983789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:43.992534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:43.992564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:44.001370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:44.001399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:44.010163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:44.010192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:44.018648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:44.018679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.197 [2024-11-04 07:21:44.027207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.197 [2024-11-04 07:21:44.027237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.197 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.456 [2024-11-04 07:21:44.035991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.036020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.045058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.045087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.054127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.054157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.062950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.062979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.071537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.071566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.080209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.080240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.088961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.088991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.098250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.098280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.106612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.106643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.115646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.115676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.124037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.124066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.132398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.132428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.140976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.141005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.149626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.149655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.158363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.158401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.167341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.167371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.175805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.175835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.184218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.184256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.193025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.193055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.202120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.202162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.211407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.211448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.220674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.220704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.229758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.229788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.238809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.238838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.247512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.247542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.256214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.256244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.457 [2024-11-04 07:21:44.265014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.457 [2024-11-04 07:21:44.265043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.457 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.458 [2024-11-04 07:21:44.273739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.458 [2024-11-04 07:21:44.273770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.458 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.458 [2024-11-04 07:21:44.282478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.458 [2024-11-04 07:21:44.282507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.458 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.458 [2024-11-04 07:21:44.290941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.458 [2024-11-04 07:21:44.290971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.458 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.299720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.299754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.312100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.312141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.319650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.319679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.331031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.331061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.339297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.339326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.349392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.349422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.358778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.358809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.366132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.366171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.717 [2024-11-04 07:21:44.377598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.717 [2024-11-04 07:21:44.377628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.717 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.389625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.389656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.398026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.398056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.413071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.413100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.423734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.423764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.440040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.440070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.450793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.450822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.458529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.458796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.469569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.469679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.477716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.477801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.488746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.488862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.497205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.497308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.507804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.507907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.515785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.515908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.527069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.527097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.538521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.538549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 
07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.546838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.546865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.718 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.718 [2024-11-04 07:21:44.555597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.718 [2024-11-04 07:21:44.555624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.564135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.564165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.572848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.572895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.586846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.586882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.595231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.595259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.604973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.605001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.612044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.612070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.623593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.623620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.632310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.632338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.641097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.641124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.649982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.650008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.978 [2024-11-04 07:21:44.659059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.978 [2024-11-04 07:21:44.659085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.978 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.667911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.667937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.682085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.682112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.699086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.699112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.715006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.715034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.726895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.726922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.741417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.741444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.750169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.750196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.765742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.765768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 00:15:42.979 Latency(us) 00:15:42.979 [2024-11-04T07:21:44.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.979 [2024-11-04T07:21:44.820Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:42.979 Nvme1n1 : 5.01 14494.08 113.23 0.00 0.00 8821.37 3783.21 18588.39 00:15:42.979 [2024-11-04T07:21:44.820Z] =================================================================================================================== 00:15:42.979 [2024-11-04T07:21:44.820Z] Total : 14494.08 113.23 0.00 0.00 8821.37 3783.21 18588.39 00:15:42.979 [2024-11-04 07:21:44.770643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.770669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.778649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.778674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.786647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.786670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.794645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.794665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.802647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.802667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.979 [2024-11-04 07:21:44.810653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:42.979 [2024-11-04 07:21:44.810674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.979 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.818650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.818671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.826653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.826673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.834654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.834677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.842657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.842690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.850663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.850685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.858662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.858682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.866663] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.866684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.874665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.874686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.886673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.886694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.894669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.894692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.902671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.902691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.910674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.910694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.918677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.918699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 
07:21:44.926677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.926698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.934680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.934703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.942680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.942700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.950682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.950703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.958685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.958705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 [2024-11-04 07:21:44.966687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.239 [2024-11-04 07:21:44.966707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.239 2024/11/04 07:21:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.239 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86083) - No such process 00:15:43.239 07:21:44 -- target/zcopy.sh@49 -- # wait 86083 00:15:43.239 07:21:44 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.239 07:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.239 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.239 07:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.239 07:21:44 -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:43.239 07:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.239 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.239 delay0 00:15:43.239 07:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.240 07:21:44 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:43.240 07:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.240 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.240 07:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.240 07:21:44 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:43.498 [2024-11-04 07:21:45.165545] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:50.092 Initializing NVMe Controllers 00:15:50.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:50.092 Initialization complete. Launching workers. 00:15:50.092 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 161 00:15:50.092 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 448, failed to submit 33 00:15:50.092 success 287, unsuccess 161, failed 0 00:15:50.092 07:21:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:50.092 07:21:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:50.092 07:21:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.092 07:21:51 -- nvmf/common.sh@116 -- # sync 00:15:50.092 07:21:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@119 -- # set +e 00:15:50.092 07:21:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.092 07:21:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.092 rmmod nvme_tcp 00:15:50.092 rmmod nvme_fabrics 00:15:50.092 rmmod nvme_keyring 00:15:50.092 07:21:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.092 07:21:51 -- nvmf/common.sh@123 -- # set -e 00:15:50.092 07:21:51 -- nvmf/common.sh@124 -- # return 0 00:15:50.092 07:21:51 -- nvmf/common.sh@477 -- # '[' -n 85914 ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@478 -- # killprocess 85914 00:15:50.092 07:21:51 -- common/autotest_common.sh@926 -- # '[' -z 85914 ']' 00:15:50.092 07:21:51 -- common/autotest_common.sh@930 -- # kill -0 85914 00:15:50.092 07:21:51 -- common/autotest_common.sh@931 -- # uname 00:15:50.092 07:21:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.092 07:21:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85914 00:15:50.092 killing process with pid 85914 00:15:50.092 07:21:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:50.092 07:21:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:50.092 07:21:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85914' 00:15:50.092 07:21:51 -- common/autotest_common.sh@945 -- # kill 85914 00:15:50.092 07:21:51 -- common/autotest_common.sh@950 -- # wait 85914 00:15:50.092 07:21:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
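The block of Code=-32602 errors above is the expected outcome of zcopy.sh repeatedly re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is still attached: spdk_nvmf_subsystem_add_ns_ext() rejects the duplicate and the RPC layer reports "Unable to add namespace". A minimal shell sketch of reproducing the same failure by hand with rpc.py, assuming an nvmf_tgt is already running with the TCP transport created and listening on the default RPC socket (the bdev, subsystem and serial names simply mirror the ones used in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b malloc0                              # backing bdev, same name the test uses
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # first add: NSID 1 is now in use
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # second add fails: Code=-32602, "Requested NSID 1 already in use"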
00:15:50.092 07:21:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.092 07:21:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.092 07:21:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.092 07:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.092 07:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.092 07:21:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:50.092 00:15:50.092 real 0m24.613s 00:15:50.092 user 0m38.454s 00:15:50.092 sys 0m7.408s 00:15:50.092 07:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.092 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.092 ************************************ 00:15:50.092 END TEST nvmf_zcopy 00:15:50.092 ************************************ 00:15:50.092 07:21:51 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.092 07:21:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:50.092 07:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.092 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.092 ************************************ 00:15:50.092 START TEST nvmf_nmic 00:15:50.092 ************************************ 00:15:50.092 07:21:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.092 * Looking for test storage... 00:15:50.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.092 07:21:51 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.092 07:21:51 -- nvmf/common.sh@7 -- # uname -s 00:15:50.092 07:21:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.092 07:21:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.092 07:21:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.092 07:21:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.092 07:21:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.092 07:21:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.092 07:21:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.092 07:21:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.092 07:21:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.092 07:21:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:50.092 07:21:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:50.092 07:21:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.092 07:21:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.092 07:21:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.092 07:21:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.092 07:21:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.092 07:21:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.092 07:21:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.092 07:21:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.092 07:21:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.092 07:21:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.092 07:21:51 -- paths/export.sh@5 -- # export PATH 00:15:50.092 07:21:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.092 07:21:51 -- nvmf/common.sh@46 -- # : 0 00:15:50.092 07:21:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.092 07:21:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.092 07:21:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.092 07:21:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.092 07:21:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.092 07:21:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.092 07:21:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.092 07:21:51 -- target/nmic.sh@14 -- # nvmftestinit 00:15:50.092 07:21:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.092 07:21:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.092 07:21:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.092 07:21:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.092 07:21:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.092 07:21:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:50.092 07:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.092 07:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.092 07:21:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:50.092 07:21:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:50.092 07:21:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.092 07:21:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.092 07:21:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.092 07:21:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:50.092 07:21:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.092 07:21:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.092 07:21:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.092 07:21:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.092 07:21:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.092 07:21:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.092 07:21:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.093 07:21:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.093 07:21:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:50.093 07:21:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:50.093 Cannot find device "nvmf_tgt_br" 00:15:50.093 07:21:51 -- nvmf/common.sh@154 -- # true 00:15:50.093 07:21:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.093 Cannot find device "nvmf_tgt_br2" 00:15:50.093 07:21:51 -- nvmf/common.sh@155 -- # true 00:15:50.093 07:21:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:50.093 07:21:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:50.093 Cannot find device "nvmf_tgt_br" 00:15:50.093 07:21:51 -- nvmf/common.sh@157 -- # true 00:15:50.093 07:21:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:50.093 Cannot find device "nvmf_tgt_br2" 00:15:50.093 07:21:51 -- nvmf/common.sh@158 -- # true 00:15:50.093 07:21:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:50.093 07:21:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:50.351 07:21:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.351 07:21:51 -- nvmf/common.sh@161 -- # true 00:15:50.351 07:21:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.351 07:21:51 -- nvmf/common.sh@162 -- # true 00:15:50.351 07:21:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.352 07:21:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.352 07:21:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.352 07:21:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:50.352 
07:21:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:50.352 07:21:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.352 07:21:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.352 07:21:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.352 07:21:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:50.352 07:21:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:50.352 07:21:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:50.352 07:21:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:50.352 07:21:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:50.352 07:21:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.352 07:21:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.352 07:21:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.352 07:21:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:50.352 07:21:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:50.352 07:21:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.352 07:21:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.352 07:21:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.352 07:21:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.352 07:21:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.352 07:21:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:50.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:15:50.352 00:15:50.352 --- 10.0.0.2 ping statistics --- 00:15:50.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.352 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:15:50.352 07:21:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:50.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:50.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:50.352 00:15:50.352 --- 10.0.0.3 ping statistics --- 00:15:50.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.352 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:50.352 07:21:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:50.352 00:15:50.352 --- 10.0.0.1 ping statistics --- 00:15:50.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.352 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:50.352 07:21:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.352 07:21:52 -- nvmf/common.sh@421 -- # return 0 00:15:50.352 07:21:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:50.352 07:21:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.352 07:21:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:50.352 07:21:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:50.352 07:21:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.352 07:21:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:50.352 07:21:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.352 07:21:52 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:50.352 07:21:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.352 07:21:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:50.352 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:15:50.352 07:21:52 -- nvmf/common.sh@469 -- # nvmfpid=86403 00:15:50.352 07:21:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.352 07:21:52 -- nvmf/common.sh@470 -- # waitforlisten 86403 00:15:50.352 07:21:52 -- common/autotest_common.sh@819 -- # '[' -z 86403 ']' 00:15:50.352 07:21:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.352 07:21:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.352 07:21:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.352 07:21:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.352 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:15:50.610 [2024-11-04 07:21:52.193744] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:50.611 [2024-11-04 07:21:52.193832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.611 [2024-11-04 07:21:52.331617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.611 [2024-11-04 07:21:52.402961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.611 [2024-11-04 07:21:52.403113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.611 [2024-11-04 07:21:52.403125] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.611 [2024-11-04 07:21:52.403133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
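The nvmf_veth_init sequence above is how the harness builds the virtual test network that each target test in this log recreates: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs connect it to the host over the nvmf_br bridge, and the three pings confirm that 10.0.0.1 (initiator side) and 10.0.0.2/10.0.0.3 (target side) are reachable. Condensed into a plain sketch built only from commands that appear in the xtrace output above (run as root; the second target interface is omitted and a default firewall policy that permits ICMP is assumed):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up            # bridge the host ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target, as verified above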
00:15:50.611 [2024-11-04 07:21:52.403660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.611 [2024-11-04 07:21:52.403817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.611 [2024-11-04 07:21:52.403921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.611 [2024-11-04 07:21:52.403924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.545 07:21:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:51.545 07:21:53 -- common/autotest_common.sh@852 -- # return 0 00:15:51.545 07:21:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:51.545 07:21:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 07:21:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.545 07:21:53 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 [2024-11-04 07:21:53.255294] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.545 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.545 07:21:53 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 Malloc0 00:15:51.545 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.545 07:21:53 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.545 07:21:53 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.545 07:21:53 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.545 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 [2024-11-04 07:21:53.315078] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.545 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.545 07:21:53 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:51.545 test case1: single bdev can't be used in multiple subsystems 00:15:51.545 07:21:53 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:51.545 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.546 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.546 07:21:53 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:51.546 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:15:51.546 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.546 07:21:53 -- target/nmic.sh@28 -- # nmic_status=0 00:15:51.546 07:21:53 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:51.546 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.546 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 [2024-11-04 07:21:53.342932] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:51.546 [2024-11-04 07:21:53.342968] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:51.546 [2024-11-04 07:21:53.342978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.546 2024/11/04 07:21:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:51.546 request: 00:15:51.546 { 00:15:51.546 "method": "nvmf_subsystem_add_ns", 00:15:51.546 "params": { 00:15:51.546 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:51.546 "namespace": { 00:15:51.546 "bdev_name": "Malloc0" 00:15:51.546 } 00:15:51.546 } 00:15:51.546 } 00:15:51.546 Got JSON-RPC error response 00:15:51.546 GoRPCClient: error on JSON-RPC call 00:15:51.546 07:21:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:51.546 07:21:53 -- target/nmic.sh@29 -- # nmic_status=1 00:15:51.546 07:21:53 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:51.546 Adding namespace failed - expected result. 00:15:51.546 07:21:53 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
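Test case1 above shows the bdev claim model at work: attaching Malloc0 to cnode1 took an exclusive_write claim on the bdev, so the attempt to add the same bdev to cnode2 fails in bdev_open() and the RPC returns -32602, which is exactly the result nmic.sh expects. A short sketch of the failing pair of calls, assuming the same target state as in the log (cnode1 already owns Malloc0):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: bdev Malloc0 already claimed by cnode1, Code=-32602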
00:15:51.546 test case2: host connect to nvmf target in multiple paths 00:15:51.546 07:21:53 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:51.546 07:21:53 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:51.546 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.546 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 [2024-11-04 07:21:53.355064] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:51.546 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.546 07:21:53 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.804 07:21:53 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:52.062 07:21:53 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.062 07:21:53 -- common/autotest_common.sh@1177 -- # local i=0 00:15:52.062 07:21:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.062 07:21:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:52.062 07:21:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:53.966 07:21:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:53.966 07:21:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:53.966 07:21:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.966 07:21:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:53.966 07:21:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.966 07:21:55 -- common/autotest_common.sh@1187 -- # return 0 00:15:53.966 07:21:55 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:53.966 [global] 00:15:53.966 thread=1 00:15:53.966 invalidate=1 00:15:53.966 rw=write 00:15:53.966 time_based=1 00:15:53.966 runtime=1 00:15:53.966 ioengine=libaio 00:15:53.966 direct=1 00:15:53.966 bs=4096 00:15:53.966 iodepth=1 00:15:53.966 norandommap=0 00:15:53.966 numjobs=1 00:15:53.966 00:15:53.966 verify_dump=1 00:15:53.966 verify_backlog=512 00:15:53.966 verify_state_save=0 00:15:53.966 do_verify=1 00:15:53.966 verify=crc32c-intel 00:15:53.966 [job0] 00:15:53.966 filename=/dev/nvme0n1 00:15:53.966 Could not set queue depth (nvme0n1) 00:15:54.225 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.225 fio-3.35 00:15:54.225 Starting 1 thread 00:15:55.602 00:15:55.602 job0: (groupid=0, jobs=1): err= 0: pid=86517: Mon Nov 4 07:21:57 2024 00:15:55.602 read: IOPS=3356, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:15:55.602 slat (nsec): min=11711, max=62826, avg=14691.81, stdev=4574.12 00:15:55.602 clat (usec): min=115, max=471, avg=144.18, stdev=18.89 00:15:55.602 lat (usec): min=128, max=483, avg=158.88, stdev=20.03 00:15:55.602 clat percentiles (usec): 00:15:55.602 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:15:55.602 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:15:55.602 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 169], 
95.00th=[ 180], 00:15:55.602 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 233], 99.95th=[ 285], 00:15:55.602 | 99.99th=[ 474] 00:15:55.602 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:55.602 slat (usec): min=17, max=106, avg=22.85, stdev= 7.07 00:15:55.602 clat (usec): min=80, max=1734, avg=103.67, stdev=31.18 00:15:55.602 lat (usec): min=99, max=1753, avg=126.52, stdev=32.41 00:15:55.602 clat percentiles (usec): 00:15:55.602 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:15:55.602 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 102], 00:15:55.602 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 126], 95.00th=[ 137], 00:15:55.602 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 184], 00:15:55.602 | 99.99th=[ 1729] 00:15:55.602 bw ( KiB/s): min=15088, max=15088, per=100.00%, avg=15088.00, stdev= 0.00, samples=1 00:15:55.602 iops : min= 3772, max= 3772, avg=3772.00, stdev= 0.00, samples=1 00:15:55.602 lat (usec) : 100=28.08%, 250=71.88%, 500=0.03% 00:15:55.602 lat (msec) : 2=0.01% 00:15:55.602 cpu : usr=2.10%, sys=10.10%, ctx=6944, majf=0, minf=5 00:15:55.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.602 issued rwts: total=3360,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.602 00:15:55.602 Run status group 0 (all jobs): 00:15:55.602 READ: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.8MB), run=1001-1001msec 00:15:55.602 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:55.602 00:15:55.602 Disk stats (read/write): 00:15:55.602 nvme0n1: ios=3121/3113, merge=0/0, ticks=508/372, in_queue=880, util=91.47% 00:15:55.602 07:21:57 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:55.602 07:21:57 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.603 07:21:57 -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.603 07:21:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:55.603 07:21:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.603 07:21:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:55.603 07:21:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.603 07:21:57 -- common/autotest_common.sh@1210 -- # return 0 00:15:55.603 07:21:57 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:55.603 07:21:57 -- target/nmic.sh@53 -- # nvmftestfini 00:15:55.603 07:21:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:55.603 07:21:57 -- nvmf/common.sh@116 -- # sync 00:15:55.603 07:21:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:55.603 07:21:57 -- nvmf/common.sh@119 -- # set +e 00:15:55.603 07:21:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:55.603 07:21:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:55.603 rmmod nvme_tcp 00:15:55.603 rmmod nvme_fabrics 00:15:55.603 rmmod nvme_keyring 00:15:55.603 07:21:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:55.603 07:21:57 -- nvmf/common.sh@123 -- # set -e 00:15:55.603 07:21:57 -- nvmf/common.sh@124 -- # return 0 00:15:55.603 07:21:57 
-- nvmf/common.sh@477 -- # '[' -n 86403 ']' 00:15:55.603 07:21:57 -- nvmf/common.sh@478 -- # killprocess 86403 00:15:55.603 07:21:57 -- common/autotest_common.sh@926 -- # '[' -z 86403 ']' 00:15:55.603 07:21:57 -- common/autotest_common.sh@930 -- # kill -0 86403 00:15:55.603 07:21:57 -- common/autotest_common.sh@931 -- # uname 00:15:55.603 07:21:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.603 07:21:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86403 00:15:55.603 killing process with pid 86403 00:15:55.603 07:21:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.603 07:21:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.603 07:21:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86403' 00:15:55.603 07:21:57 -- common/autotest_common.sh@945 -- # kill 86403 00:15:55.603 07:21:57 -- common/autotest_common.sh@950 -- # wait 86403 00:15:55.862 07:21:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.862 07:21:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:55.862 07:21:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:55.862 07:21:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.862 07:21:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:55.862 07:21:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.862 07:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.862 07:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.862 07:21:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:55.862 00:15:55.862 real 0m5.929s 00:15:55.862 user 0m20.521s 00:15:55.862 sys 0m1.245s 00:15:55.862 07:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.862 07:21:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 ************************************ 00:15:55.862 END TEST nvmf_nmic 00:15:55.862 ************************************ 00:15:55.862 07:21:57 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:55.862 07:21:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:55.862 07:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.862 07:21:57 -- common/autotest_common.sh@10 -- # set +x 00:15:56.121 ************************************ 00:15:56.121 START TEST nvmf_fio_target 00:15:56.121 ************************************ 00:15:56.121 07:21:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:56.121 * Looking for test storage... 
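As a sanity check on the nmic fio numbers reported further up (job0, bs=4096, 1 second runtime), the bandwidth lines follow directly from the completed IOPS; a quick shell sketch of the arithmetic, with the values copied from the log:

echo $(( 3356 * 4096 ))    # 13746176 B/s, about 13.1 MiB/s read,  matching "BW=13.1MiB/s (13.7MB/s)"
echo $(( 3580 * 4096 ))    # 14663680 B/s, about 14.0 MiB/s write, matching "BW=14.0MiB/s (14.7MB/s)"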
00:15:56.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.121 07:21:57 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.121 07:21:57 -- nvmf/common.sh@7 -- # uname -s 00:15:56.121 07:21:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.121 07:21:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.121 07:21:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.121 07:21:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.121 07:21:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.121 07:21:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.121 07:21:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.121 07:21:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.121 07:21:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.121 07:21:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.121 07:21:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:56.121 07:21:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:15:56.121 07:21:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.122 07:21:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.122 07:21:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.122 07:21:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.122 07:21:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.122 07:21:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.122 07:21:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.122 07:21:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.122 07:21:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.122 07:21:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.122 07:21:57 -- paths/export.sh@5 
-- # export PATH 00:15:56.122 07:21:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.122 07:21:57 -- nvmf/common.sh@46 -- # : 0 00:15:56.122 07:21:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.122 07:21:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.122 07:21:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.122 07:21:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.122 07:21:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.122 07:21:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.122 07:21:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.122 07:21:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.122 07:21:57 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.122 07:21:57 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.122 07:21:57 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.122 07:21:57 -- target/fio.sh@16 -- # nvmftestinit 00:15:56.122 07:21:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:56.122 07:21:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.122 07:21:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:56.122 07:21:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:56.122 07:21:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:56.122 07:21:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.122 07:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.122 07:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.122 07:21:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:56.122 07:21:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:56.122 07:21:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:56.122 07:21:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:56.122 07:21:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:56.122 07:21:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:56.122 07:21:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.122 07:21:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.122 07:21:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.122 07:21:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:56.122 07:21:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.122 07:21:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.122 07:21:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.122 07:21:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.122 07:21:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.122 07:21:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.122 07:21:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.122 07:21:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.122 07:21:57 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:56.122 07:21:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:56.122 Cannot find device "nvmf_tgt_br" 00:15:56.122 07:21:57 -- nvmf/common.sh@154 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.122 Cannot find device "nvmf_tgt_br2" 00:15:56.122 07:21:57 -- nvmf/common.sh@155 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:56.122 07:21:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:56.122 Cannot find device "nvmf_tgt_br" 00:15:56.122 07:21:57 -- nvmf/common.sh@157 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:56.122 Cannot find device "nvmf_tgt_br2" 00:15:56.122 07:21:57 -- nvmf/common.sh@158 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:56.122 07:21:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:56.122 07:21:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.122 07:21:57 -- nvmf/common.sh@161 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.122 07:21:57 -- nvmf/common.sh@162 -- # true 00:15:56.122 07:21:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.122 07:21:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.122 07:21:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.381 07:21:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.381 07:21:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.381 07:21:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.381 07:21:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.381 07:21:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.381 07:21:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.381 07:21:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:56.381 07:21:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:56.381 07:21:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:56.381 07:21:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:56.381 07:21:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.381 07:21:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.381 07:21:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.381 07:21:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:56.381 07:21:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:56.381 07:21:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.381 07:21:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.381 07:21:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.381 07:21:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.381 07:21:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.381 07:21:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:56.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:56.381 00:15:56.381 --- 10.0.0.2 ping statistics --- 00:15:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.381 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:56.381 07:21:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:56.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:56.381 00:15:56.381 --- 10.0.0.3 ping statistics --- 00:15:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.381 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:56.381 07:21:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:56.381 00:15:56.381 --- 10.0.0.1 ping statistics --- 00:15:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.381 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:56.381 07:21:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.381 07:21:58 -- nvmf/common.sh@421 -- # return 0 00:15:56.381 07:21:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:56.381 07:21:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.381 07:21:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:56.381 07:21:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:56.381 07:21:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.381 07:21:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:56.381 07:21:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:56.381 07:21:58 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:56.381 07:21:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:56.381 07:21:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:56.381 07:21:58 -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 07:21:58 -- nvmf/common.sh@469 -- # nvmfpid=86697 00:15:56.381 07:21:58 -- nvmf/common.sh@470 -- # waitforlisten 86697 00:15:56.381 07:21:58 -- common/autotest_common.sh@819 -- # '[' -z 86697 ']' 00:15:56.382 07:21:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.382 07:21:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.382 07:21:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.382 07:21:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.382 07:21:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.382 07:21:58 -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 [2024-11-04 07:21:58.225056] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
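The waitforlisten step above blocks until the freshly started nvmf_tgt (pid 86697) creates its JSON-RPC socket at /var/tmp/spdk.sock. A rough stand-in for that helper, not the implementation the harness actually uses, is to poll the socket with a cheap RPC until it answers; rpc.py, its -t timeout option, and the rpc_get_methods method are standard SPDK tooling, while the loop itself is only an illustration:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do    # retry until the RPC socket accepts requests
    sleep 0.5
done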
00:15:56.641 [2024-11-04 07:21:58.225140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.641 [2024-11-04 07:21:58.364580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.641 [2024-11-04 07:21:58.424822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.641 [2024-11-04 07:21:58.424966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.641 [2024-11-04 07:21:58.424978] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.641 [2024-11-04 07:21:58.424986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.641 [2024-11-04 07:21:58.425162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.641 [2024-11-04 07:21:58.425641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.641 [2024-11-04 07:21:58.426263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.641 [2024-11-04 07:21:58.426320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.576 07:21:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.576 07:21:59 -- common/autotest_common.sh@852 -- # return 0 00:15:57.576 07:21:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:57.576 07:21:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:57.576 07:21:59 -- common/autotest_common.sh@10 -- # set +x 00:15:57.576 07:21:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.576 07:21:59 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.835 [2024-11-04 07:21:59.497176] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.835 07:21:59 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:58.093 07:21:59 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:58.093 07:21:59 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:58.352 07:22:00 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:58.352 07:22:00 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:58.920 07:22:00 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:58.920 07:22:00 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.178 07:22:00 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:59.178 07:22:00 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:59.178 07:22:00 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.437 07:22:01 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:59.437 07:22:01 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.695 07:22:01 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:59.695 07:22:01 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.954 07:22:01 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:15:59.954 07:22:01 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:00.213 07:22:01 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:00.471 07:22:02 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:00.471 07:22:02 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:00.730 07:22:02 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:00.730 07:22:02 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:00.989 07:22:02 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.247 [2024-11-04 07:22:02.883531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.247 07:22:02 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:01.505 07:22:03 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:01.763 07:22:03 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.763 07:22:03 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:01.763 07:22:03 -- common/autotest_common.sh@1177 -- # local i=0 00:16:01.763 07:22:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.763 07:22:03 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:01.763 07:22:03 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:01.763 07:22:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:04.296 07:22:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:04.296 07:22:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:04.296 07:22:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.296 07:22:05 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:04.296 07:22:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.296 07:22:05 -- common/autotest_common.sh@1187 -- # return 0 00:16:04.296 07:22:05 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:04.296 [global] 00:16:04.296 thread=1 00:16:04.296 invalidate=1 00:16:04.296 rw=write 00:16:04.296 time_based=1 00:16:04.296 runtime=1 00:16:04.296 ioengine=libaio 00:16:04.296 direct=1 00:16:04.296 bs=4096 00:16:04.296 iodepth=1 00:16:04.296 norandommap=0 00:16:04.296 numjobs=1 00:16:04.296 00:16:04.296 verify_dump=1 00:16:04.296 verify_backlog=512 00:16:04.296 verify_state_save=0 00:16:04.296 do_verify=1 00:16:04.296 verify=crc32c-intel 00:16:04.296 [job0] 00:16:04.296 filename=/dev/nvme0n1 00:16:04.296 [job1] 00:16:04.296 filename=/dev/nvme0n2 00:16:04.296 [job2] 00:16:04.296 filename=/dev/nvme0n3 00:16:04.296 [job3] 00:16:04.296 filename=/dev/nvme0n4 00:16:04.296 Could not set queue depth (nvme0n1) 00:16:04.296 Could not set queue depth (nvme0n2) 
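fio.sh above ends up exposing four namespaces through nqn.2016-06.io.spdk:cnode1: the two plain malloc bdevs plus the raid0 and concat0 bdevs it assembles from the remaining mallocs, which is why the initiator sees nvme0n1 through nvme0n4 and the job file drives all four devices. The raid assembly step, condensed from the rpc.py calls in the trace (64 MiB malloc bdevs, 64 KiB strip size), looks like this:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for b in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do $rpc bdev_malloc_create 64 512 -b $b; done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                  # striped RAID0 over two mallocs
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'   # concatenation of three mallocs
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0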
00:16:04.296 Could not set queue depth (nvme0n3) 00:16:04.296 Could not set queue depth (nvme0n4) 00:16:04.296 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.296 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.296 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.296 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.296 fio-3.35 00:16:04.296 Starting 4 threads 00:16:05.239 00:16:05.239 job0: (groupid=0, jobs=1): err= 0: pid=86991: Mon Nov 4 07:22:06 2024 00:16:05.239 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:05.239 slat (nsec): min=10521, max=62561, avg=16232.67, stdev=4906.47 00:16:05.239 clat (usec): min=166, max=1061, avg=323.44, stdev=98.01 00:16:05.239 lat (usec): min=183, max=1077, avg=339.67, stdev=97.21 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 215], 00:16:05.239 | 30.00th=[ 241], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 347], 00:16:05.239 | 70.00th=[ 359], 80.00th=[ 400], 90.00th=[ 469], 95.00th=[ 490], 00:16:05.239 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 1057], 00:16:05.239 | 99.99th=[ 1057] 00:16:05.239 write: IOPS=1893, BW=7572KiB/s (7754kB/s)(7580KiB/1001msec); 0 zone resets 00:16:05.239 slat (nsec): min=11143, max=83966, avg=23148.26, stdev=7442.05 00:16:05.239 clat (usec): min=133, max=646, avg=225.84, stdev=65.55 00:16:05.239 lat (usec): min=158, max=665, avg=248.98, stdev=63.86 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:16:05.239 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 221], 00:16:05.239 | 70.00th=[ 258], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 343], 00:16:05.239 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 570], 99.95th=[ 644], 00:16:05.239 | 99.99th=[ 644] 00:16:05.239 bw ( KiB/s): min= 8192, max= 8192, per=29.99%, avg=8192.00, stdev= 0.00, samples=1 00:16:05.239 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:05.239 lat (usec) : 250=52.00%, 500=46.25%, 750=1.72% 00:16:05.239 lat (msec) : 2=0.03% 00:16:05.239 cpu : usr=1.40%, sys=5.20%, ctx=3432, majf=0, minf=7 00:16:05.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.239 issued rwts: total=1536,1895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.239 job1: (groupid=0, jobs=1): err= 0: pid=86992: Mon Nov 4 07:22:06 2024 00:16:05.239 read: IOPS=1472, BW=5890KiB/s (6031kB/s)(5896KiB/1001msec) 00:16:05.239 slat (nsec): min=9741, max=82746, avg=16483.18, stdev=6972.48 00:16:05.239 clat (usec): min=201, max=3920, avg=363.08, stdev=149.93 00:16:05.239 lat (usec): min=217, max=3953, avg=379.56, stdev=150.35 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:16:05.239 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:16:05.239 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 453], 95.00th=[ 469], 00:16:05.239 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 3589], 99.95th=[ 3916], 00:16:05.239 | 99.99th=[ 
3916] 00:16:05.239 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:05.239 slat (usec): min=10, max=125, avg=22.94, stdev= 7.35 00:16:05.239 clat (usec): min=143, max=631, avg=260.08, stdev=54.17 00:16:05.239 lat (usec): min=160, max=651, avg=283.02, stdev=55.48 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 155], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 206], 00:16:05.239 | 30.00th=[ 231], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 281], 00:16:05.239 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:16:05.239 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 424], 99.95th=[ 635], 00:16:05.239 | 99.99th=[ 635] 00:16:05.239 bw ( KiB/s): min= 8192, max= 8192, per=29.99%, avg=8192.00, stdev= 0.00, samples=1 00:16:05.239 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:05.239 lat (usec) : 250=20.80%, 500=78.60%, 750=0.43% 00:16:05.239 lat (msec) : 2=0.10%, 4=0.07% 00:16:05.239 cpu : usr=1.20%, sys=4.60%, ctx=3012, majf=0, minf=13 00:16:05.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.239 issued rwts: total=1474,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.239 job2: (groupid=0, jobs=1): err= 0: pid=86993: Mon Nov 4 07:22:06 2024 00:16:05.239 read: IOPS=1474, BW=5898KiB/s (6040kB/s)(5904KiB/1001msec) 00:16:05.239 slat (nsec): min=9839, max=48642, avg=15297.19, stdev=4354.84 00:16:05.239 clat (usec): min=144, max=3844, avg=353.39, stdev=175.17 00:16:05.239 lat (usec): min=160, max=3859, avg=368.68, stdev=174.98 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 161], 5.00th=[ 186], 10.00th=[ 235], 20.00th=[ 302], 00:16:05.239 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:16:05.239 | 70.00th=[ 367], 80.00th=[ 416], 90.00th=[ 453], 95.00th=[ 478], 00:16:05.239 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 3589], 99.95th=[ 3851], 00:16:05.239 | 99.99th=[ 3851] 00:16:05.239 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:05.239 slat (nsec): min=11427, max=78636, avg=22982.60, stdev=7610.45 00:16:05.239 clat (usec): min=148, max=906, avg=269.90, stdev=54.41 00:16:05.239 lat (usec): min=167, max=928, avg=292.88, stdev=55.11 00:16:05.239 clat percentiles (usec): 00:16:05.239 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 223], 00:16:05.239 | 30.00th=[ 241], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 285], 00:16:05.239 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 355], 00:16:05.240 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 420], 99.95th=[ 906], 00:16:05.240 | 99.99th=[ 906] 00:16:05.240 bw ( KiB/s): min= 8208, max= 8208, per=30.05%, avg=8208.00, stdev= 0.00, samples=1 00:16:05.240 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:16:05.240 lat (usec) : 250=23.67%, 500=75.20%, 750=0.93%, 1000=0.03% 00:16:05.240 lat (msec) : 2=0.07%, 4=0.10% 00:16:05.240 cpu : usr=1.30%, sys=4.50%, ctx=3013, majf=0, minf=11 00:16:05.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.240 issued rwts: total=1476,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:05.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.240 job3: (groupid=0, jobs=1): err= 0: pid=86994: Mon Nov 4 07:22:06 2024 00:16:05.240 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:05.240 slat (nsec): min=10214, max=64915, avg=16650.51, stdev=5284.20 00:16:05.240 clat (usec): min=173, max=2077, avg=313.26, stdev=95.02 00:16:05.240 lat (usec): min=192, max=2095, avg=329.91, stdev=93.49 00:16:05.240 clat percentiles (usec): 00:16:05.240 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 215], 00:16:05.240 | 30.00th=[ 243], 40.00th=[ 302], 50.00th=[ 326], 60.00th=[ 338], 00:16:05.240 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 449], 00:16:05.240 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 2073], 00:16:05.240 | 99.99th=[ 2073] 00:16:05.240 write: IOPS=1867, BW=7469KiB/s (7648kB/s)(7476KiB/1001msec); 0 zone resets 00:16:05.240 slat (usec): min=11, max=169, avg=24.51, stdev= 9.02 00:16:05.240 clat (usec): min=130, max=747, avg=235.91, stdev=67.40 00:16:05.240 lat (usec): min=159, max=768, avg=260.42, stdev=66.33 00:16:05.240 clat percentiles (usec): 00:16:05.240 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:16:05.240 | 30.00th=[ 186], 40.00th=[ 198], 50.00th=[ 215], 60.00th=[ 245], 00:16:05.240 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 347], 00:16:05.240 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 529], 99.95th=[ 750], 00:16:05.240 | 99.99th=[ 750] 00:16:05.240 bw ( KiB/s): min= 8192, max= 8192, per=29.99%, avg=8192.00, stdev= 0.00, samples=1 00:16:05.240 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:05.240 lat (usec) : 250=47.81%, 500=51.92%, 750=0.23% 00:16:05.240 lat (msec) : 4=0.03% 00:16:05.240 cpu : usr=1.40%, sys=5.30%, ctx=3405, majf=0, minf=7 00:16:05.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.240 issued rwts: total=1536,1869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.240 00:16:05.240 Run status group 0 (all jobs): 00:16:05.240 READ: bw=23.5MiB/s (24.6MB/s), 5890KiB/s-6138KiB/s (6031kB/s-6285kB/s), io=23.5MiB (24.7MB), run=1001-1001msec 00:16:05.240 WRITE: bw=26.7MiB/s (28.0MB/s), 6138KiB/s-7572KiB/s (6285kB/s-7754kB/s), io=26.7MiB (28.0MB), run=1001-1001msec 00:16:05.240 00:16:05.240 Disk stats (read/write): 00:16:05.240 nvme0n1: ios=1462/1536, merge=0/0, ticks=471/348, in_queue=819, util=88.08% 00:16:05.240 nvme0n2: ios=1143/1536, merge=0/0, ticks=453/409, in_queue=862, util=88.65% 00:16:05.240 nvme0n3: ios=1108/1536, merge=0/0, ticks=384/417, in_queue=801, util=88.60% 00:16:05.240 nvme0n4: ios=1389/1536, merge=0/0, ticks=443/360, in_queue=803, util=89.78% 00:16:05.240 07:22:06 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:05.240 [global] 00:16:05.240 thread=1 00:16:05.240 invalidate=1 00:16:05.240 rw=randwrite 00:16:05.240 time_based=1 00:16:05.240 runtime=1 00:16:05.240 ioengine=libaio 00:16:05.240 direct=1 00:16:05.240 bs=4096 00:16:05.240 iodepth=1 00:16:05.240 norandommap=0 00:16:05.240 numjobs=1 00:16:05.240 00:16:05.240 verify_dump=1 00:16:05.240 verify_backlog=512 00:16:05.240 verify_state_save=0 00:16:05.240 do_verify=1 00:16:05.240 verify=crc32c-intel 00:16:05.240 
[job0] 00:16:05.240 filename=/dev/nvme0n1 00:16:05.240 [job1] 00:16:05.240 filename=/dev/nvme0n2 00:16:05.240 [job2] 00:16:05.240 filename=/dev/nvme0n3 00:16:05.240 [job3] 00:16:05.240 filename=/dev/nvme0n4 00:16:05.240 Could not set queue depth (nvme0n1) 00:16:05.240 Could not set queue depth (nvme0n2) 00:16:05.240 Could not set queue depth (nvme0n3) 00:16:05.240 Could not set queue depth (nvme0n4) 00:16:05.497 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.497 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.497 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.497 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.497 fio-3.35 00:16:05.497 Starting 4 threads 00:16:06.874 00:16:06.874 job0: (groupid=0, jobs=1): err= 0: pid=87047: Mon Nov 4 07:22:08 2024 00:16:06.874 read: IOPS=1298, BW=5195KiB/s (5319kB/s)(5200KiB/1001msec) 00:16:06.874 slat (nsec): min=8945, max=51367, avg=13933.66, stdev=3984.62 00:16:06.874 clat (usec): min=176, max=2322, avg=374.27, stdev=81.07 00:16:06.874 lat (usec): min=187, max=2333, avg=388.21, stdev=81.47 00:16:06.874 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 219], 5.00th=[ 258], 10.00th=[ 314], 20.00th=[ 334], 00:16:06.875 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 383], 00:16:06.875 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 449], 95.00th=[ 486], 00:16:06.875 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 676], 99.95th=[ 2311], 00:16:06.875 | 99.99th=[ 2311] 00:16:06.875 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:06.875 slat (nsec): min=10976, max=90130, avg=22943.40, stdev=6679.73 00:16:06.875 clat (usec): min=122, max=490, avg=296.17, stdev=53.37 00:16:06.875 lat (usec): min=144, max=510, avg=319.11, stdev=53.98 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 149], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 249], 00:16:06.875 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 310], 00:16:06.875 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 388], 00:16:06.875 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 453], 99.95th=[ 490], 00:16:06.875 | 99.99th=[ 490] 00:16:06.875 bw ( KiB/s): min= 7200, max= 7200, per=21.99%, avg=7200.00, stdev= 0.00, samples=1 00:16:06.875 iops : min= 1800, max= 1800, avg=1800.00, stdev= 0.00, samples=1 00:16:06.875 lat (usec) : 250=12.80%, 500=85.90%, 750=1.27% 00:16:06.875 lat (msec) : 4=0.04% 00:16:06.875 cpu : usr=1.30%, sys=3.90%, ctx=2836, majf=0, minf=13 00:16:06.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 issued rwts: total=1300,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.875 job1: (groupid=0, jobs=1): err= 0: pid=87048: Mon Nov 4 07:22:08 2024 00:16:06.875 read: IOPS=2220, BW=8883KiB/s (9096kB/s)(8892KiB/1001msec) 00:16:06.875 slat (nsec): min=11404, max=54214, avg=15973.10, stdev=4226.38 00:16:06.875 clat (usec): min=146, max=688, avg=208.73, stdev=28.92 00:16:06.875 lat (usec): min=161, max=705, avg=224.70, stdev=29.14 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 
155], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 186], 00:16:06.875 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:16:06.875 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 253], 00:16:06.875 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 359], 99.95th=[ 383], 00:16:06.875 | 99.99th=[ 693] 00:16:06.875 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:06.875 slat (nsec): min=17039, max=93394, avg=23633.82, stdev=7774.88 00:16:06.875 clat (usec): min=105, max=874, avg=168.40, stdev=31.68 00:16:06.875 lat (usec): min=123, max=901, avg=192.03, stdev=33.57 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 120], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 149], 00:16:06.875 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:16:06.875 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 212], 00:16:06.875 | 99.00th=[ 253], 99.50th=[ 277], 99.90th=[ 570], 99.95th=[ 578], 00:16:06.875 | 99.99th=[ 873] 00:16:06.875 bw ( KiB/s): min=10664, max=10664, per=32.58%, avg=10664.00, stdev= 0.00, samples=1 00:16:06.875 iops : min= 2666, max= 2666, avg=2666.00, stdev= 0.00, samples=1 00:16:06.875 lat (usec) : 250=96.76%, 500=3.16%, 750=0.06%, 1000=0.02% 00:16:06.875 cpu : usr=1.40%, sys=7.50%, ctx=4787, majf=0, minf=11 00:16:06.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 issued rwts: total=2223,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.875 job2: (groupid=0, jobs=1): err= 0: pid=87049: Mon Nov 4 07:22:08 2024 00:16:06.875 read: IOPS=2131, BW=8527KiB/s (8732kB/s)(8536KiB/1001msec) 00:16:06.875 slat (nsec): min=11865, max=48934, avg=13946.92, stdev=4052.40 00:16:06.875 clat (usec): min=171, max=531, avg=219.02, stdev=22.56 00:16:06.875 lat (usec): min=184, max=544, avg=232.96, stdev=22.85 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:16:06.875 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:16:06.875 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:16:06.875 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 318], 00:16:06.875 | 99.99th=[ 529] 00:16:06.875 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:06.875 slat (usec): min=18, max=101, avg=21.22, stdev= 6.11 00:16:06.875 clat (usec): min=129, max=317, avg=172.17, stdev=20.04 00:16:06.875 lat (usec): min=153, max=371, avg=193.39, stdev=21.78 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:16:06.875 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:16:06.875 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 208], 00:16:06.875 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 269], 00:16:06.875 | 99.99th=[ 318] 00:16:06.875 bw ( KiB/s): min=10448, max=10448, per=31.92%, avg=10448.00, stdev= 0.00, samples=1 00:16:06.875 iops : min= 2612, max= 2612, avg=2612.00, stdev= 0.00, samples=1 00:16:06.875 lat (usec) : 250=95.95%, 500=4.03%, 750=0.02% 00:16:06.875 cpu : usr=1.20%, sys=6.40%, ctx=4694, majf=0, minf=11 00:16:06.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.875 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 issued rwts: total=2134,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.875 job3: (groupid=0, jobs=1): err= 0: pid=87050: Mon Nov 4 07:22:08 2024 00:16:06.875 read: IOPS=1299, BW=5199KiB/s (5324kB/s)(5204KiB/1001msec) 00:16:06.875 slat (nsec): min=8621, max=51141, avg=14007.18, stdev=4250.60 00:16:06.875 clat (usec): min=194, max=2521, avg=373.93, stdev=84.98 00:16:06.875 lat (usec): min=206, max=2534, avg=387.93, stdev=85.38 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 217], 5.00th=[ 258], 10.00th=[ 310], 20.00th=[ 334], 00:16:06.875 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:16:06.875 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 453], 95.00th=[ 490], 00:16:06.875 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 2507], 00:16:06.875 | 99.99th=[ 2507] 00:16:06.875 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:06.875 slat (usec): min=10, max=147, avg=23.04, stdev= 7.49 00:16:06.875 clat (usec): min=141, max=452, avg=295.98, stdev=50.61 00:16:06.875 lat (usec): min=163, max=478, avg=319.02, stdev=51.30 00:16:06.875 clat percentiles (usec): 00:16:06.875 | 1.00th=[ 182], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 249], 00:16:06.875 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 306], 00:16:06.875 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 379], 00:16:06.875 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 453], 00:16:06.875 | 99.99th=[ 453] 00:16:06.875 bw ( KiB/s): min= 7200, max= 7200, per=21.99%, avg=7200.00, stdev= 0.00, samples=1 00:16:06.875 iops : min= 1800, max= 1800, avg=1800.00, stdev= 0.00, samples=1 00:16:06.875 lat (usec) : 250=13.54%, 500=85.13%, 750=1.30% 00:16:06.875 lat (msec) : 4=0.04% 00:16:06.875 cpu : usr=1.70%, sys=3.50%, ctx=2839, majf=0, minf=9 00:16:06.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.875 issued rwts: total=1301,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.875 00:16:06.875 Run status group 0 (all jobs): 00:16:06.875 READ: bw=27.2MiB/s (28.5MB/s), 5195KiB/s-8883KiB/s (5319kB/s-9096kB/s), io=27.2MiB (28.5MB), run=1001-1001msec 00:16:06.875 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:16:06.875 00:16:06.875 Disk stats (read/write): 00:16:06.875 nvme0n1: ios=1074/1438, merge=0/0, ticks=412/443, in_queue=855, util=88.38% 00:16:06.875 nvme0n2: ios=2082/2048, merge=0/0, ticks=487/373, in_queue=860, util=89.37% 00:16:06.875 nvme0n3: ios=1969/2048, merge=0/0, ticks=441/372, in_queue=813, util=89.07% 00:16:06.875 nvme0n4: ios=1024/1438, merge=0/0, ticks=391/430, in_queue=821, util=89.63% 00:16:06.875 07:22:08 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:06.875 [global] 00:16:06.875 thread=1 00:16:06.875 invalidate=1 00:16:06.875 rw=write 00:16:06.875 time_based=1 00:16:06.875 runtime=1 00:16:06.875 ioengine=libaio 00:16:06.875 direct=1 00:16:06.875 bs=4096 00:16:06.875 iodepth=128 
00:16:06.875 norandommap=0 00:16:06.875 numjobs=1 00:16:06.875 00:16:06.875 verify_dump=1 00:16:06.875 verify_backlog=512 00:16:06.875 verify_state_save=0 00:16:06.875 do_verify=1 00:16:06.875 verify=crc32c-intel 00:16:06.875 [job0] 00:16:06.875 filename=/dev/nvme0n1 00:16:06.875 [job1] 00:16:06.875 filename=/dev/nvme0n2 00:16:06.875 [job2] 00:16:06.875 filename=/dev/nvme0n3 00:16:06.875 [job3] 00:16:06.875 filename=/dev/nvme0n4 00:16:06.875 Could not set queue depth (nvme0n1) 00:16:06.875 Could not set queue depth (nvme0n2) 00:16:06.875 Could not set queue depth (nvme0n3) 00:16:06.875 Could not set queue depth (nvme0n4) 00:16:06.875 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.875 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.875 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.875 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.876 fio-3.35 00:16:06.876 Starting 4 threads 00:16:08.254 00:16:08.254 job0: (groupid=0, jobs=1): err= 0: pid=87110: Mon Nov 4 07:22:09 2024 00:16:08.254 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:16:08.254 slat (usec): min=4, max=7097, avg=225.90, stdev=876.81 00:16:08.254 clat (usec): min=21994, max=35335, avg=29328.87, stdev=2036.02 00:16:08.254 lat (usec): min=24149, max=35385, avg=29554.78, stdev=1890.07 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[24249], 5.00th=[25297], 10.00th=[26870], 20.00th=[28443], 00:16:08.254 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29230], 60.00th=[29754], 00:16:08.254 | 70.00th=[30016], 80.00th=[30540], 90.00th=[31065], 95.00th=[32900], 00:16:08.254 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:16:08.254 | 99.99th=[35390] 00:16:08.254 write: IOPS=2314, BW=9256KiB/s (9478kB/s)(9284KiB/1003msec); 0 zone resets 00:16:08.254 slat (usec): min=18, max=7885, avg=222.67, stdev=1064.46 00:16:08.254 clat (usec): min=568, max=35405, avg=28312.93, stdev=3599.12 00:16:08.254 lat (usec): min=7047, max=35434, avg=28535.60, stdev=3453.01 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[ 7701], 5.00th=[22152], 10.00th=[26870], 20.00th=[27657], 00:16:08.254 | 30.00th=[27919], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:16:08.254 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30540], 95.00th=[31851], 00:16:08.254 | 99.00th=[33424], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:16:08.254 | 99.99th=[35390] 00:16:08.254 bw ( KiB/s): min= 8488, max= 9072, per=17.82%, avg=8780.00, stdev=412.95, samples=2 00:16:08.254 iops : min= 2122, max= 2268, avg=2195.00, stdev=103.24, samples=2 00:16:08.254 lat (usec) : 750=0.02% 00:16:08.254 lat (msec) : 10=0.73%, 20=0.76%, 50=98.49% 00:16:08.254 cpu : usr=2.59%, sys=6.99%, ctx=262, majf=0, minf=17 00:16:08.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:08.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.254 issued rwts: total=2048,2321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.254 job1: (groupid=0, jobs=1): err= 0: pid=87111: Mon Nov 4 07:22:09 2024 00:16:08.254 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 
00:16:08.254 slat (usec): min=4, max=8487, avg=113.48, stdev=590.37 00:16:08.254 clat (usec): min=7042, max=35470, avg=15037.79, stdev=5341.97 00:16:08.254 lat (usec): min=7073, max=35488, avg=15151.27, stdev=5387.25 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11469], 20.00th=[12387], 00:16:08.254 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:16:08.254 | 70.00th=[14091], 80.00th=[15139], 90.00th=[26084], 95.00th=[28967], 00:16:08.254 | 99.00th=[32113], 99.50th=[32375], 99.90th=[34341], 99.95th=[35390], 00:16:08.254 | 99.99th=[35390] 00:16:08.254 write: IOPS=4403, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1003msec); 0 zone resets 00:16:08.254 slat (usec): min=11, max=8001, avg=113.21, stdev=619.75 00:16:08.254 clat (usec): min=494, max=30383, avg=14729.62, stdev=3661.12 00:16:08.254 lat (usec): min=5982, max=30407, avg=14842.82, stdev=3658.56 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[12387], 20.00th=[13304], 00:16:08.254 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:16:08.254 | 70.00th=[15008], 80.00th=[15270], 90.00th=[16188], 95.00th=[22676], 00:16:08.254 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:16:08.254 | 99.99th=[30278] 00:16:08.254 bw ( KiB/s): min=14192, max=20160, per=34.85%, avg=17176.00, stdev=4220.01, samples=2 00:16:08.254 iops : min= 3548, max= 5040, avg=4294.00, stdev=1055.00, samples=2 00:16:08.254 lat (usec) : 500=0.01% 00:16:08.254 lat (msec) : 10=4.17%, 20=86.89%, 50=8.93% 00:16:08.254 cpu : usr=4.29%, sys=12.08%, ctx=459, majf=0, minf=12 00:16:08.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:08.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.254 issued rwts: total=4096,4417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.254 job2: (groupid=0, jobs=1): err= 0: pid=87112: Mon Nov 4 07:22:09 2024 00:16:08.254 read: IOPS=3190, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1002msec) 00:16:08.254 slat (usec): min=9, max=6111, avg=137.56, stdev=735.99 00:16:08.254 clat (usec): min=1404, max=26015, avg=18177.90, stdev=3086.75 00:16:08.254 lat (usec): min=4687, max=26072, avg=18315.46, stdev=3108.33 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[ 5407], 5.00th=[13042], 10.00th=[14091], 20.00th=[15270], 00:16:08.254 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:16:08.254 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20841], 95.00th=[21890], 00:16:08.254 | 99.00th=[23725], 99.50th=[24773], 99.90th=[25560], 99.95th=[25822], 00:16:08.254 | 99.99th=[26084] 00:16:08.254 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:16:08.254 slat (usec): min=11, max=6900, avg=147.30, stdev=706.11 00:16:08.254 clat (usec): min=10690, max=26291, avg=19013.55, stdev=3042.06 00:16:08.254 lat (usec): min=10714, max=26341, avg=19160.85, stdev=3011.01 00:16:08.254 clat percentiles (usec): 00:16:08.254 | 1.00th=[11731], 5.00th=[14746], 10.00th=[15401], 20.00th=[15926], 00:16:08.254 | 30.00th=[16581], 40.00th=[17957], 50.00th=[20055], 60.00th=[20579], 00:16:08.254 | 70.00th=[21103], 80.00th=[21890], 90.00th=[22676], 95.00th=[23200], 00:16:08.254 | 99.00th=[23725], 99.50th=[24773], 99.90th=[26084], 99.95th=[26084], 00:16:08.254 | 99.99th=[26346] 
00:16:08.254 bw ( KiB/s): min=12376, max=16280, per=29.07%, avg=14328.00, stdev=2760.54, samples=2 00:16:08.254 iops : min= 3094, max= 4070, avg=3582.00, stdev=690.14, samples=2 00:16:08.254 lat (msec) : 2=0.01%, 10=0.97%, 20=61.19%, 50=37.83% 00:16:08.254 cpu : usr=3.80%, sys=11.19%, ctx=393, majf=0, minf=9 00:16:08.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:08.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.254 issued rwts: total=3197,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.254 job3: (groupid=0, jobs=1): err= 0: pid=87113: Mon Nov 4 07:22:09 2024 00:16:08.254 read: IOPS=2008, BW=8036KiB/s (8229kB/s)(8068KiB/1004msec) 00:16:08.254 slat (usec): min=7, max=12362, avg=265.88, stdev=1067.27 00:16:08.254 clat (usec): min=646, max=60772, avg=32916.94, stdev=9643.07 00:16:08.254 lat (usec): min=7258, max=60798, avg=33182.81, stdev=9644.37 00:16:08.254 clat percentiles (usec): 00:16:08.255 | 1.00th=[ 7767], 5.00th=[24511], 10.00th=[27395], 20.00th=[28181], 00:16:08.255 | 30.00th=[28967], 40.00th=[28967], 50.00th=[29492], 60.00th=[29754], 00:16:08.255 | 70.00th=[30278], 80.00th=[35914], 90.00th=[51119], 95.00th=[53216], 00:16:08.255 | 99.00th=[57410], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:16:08.255 | 99.99th=[60556] 00:16:08.255 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:16:08.255 slat (usec): min=15, max=7168, avg=216.97, stdev=1026.12 00:16:08.255 clat (usec): min=20734, max=40303, avg=29225.21, stdev=2255.44 00:16:08.255 lat (usec): min=20767, max=40331, avg=29442.18, stdev=2059.69 00:16:08.255 clat percentiles (usec): 00:16:08.255 | 1.00th=[22414], 5.00th=[26870], 10.00th=[27395], 20.00th=[27919], 00:16:08.255 | 30.00th=[28181], 40.00th=[28967], 50.00th=[28967], 60.00th=[29230], 00:16:08.255 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30802], 95.00th=[33424], 00:16:08.255 | 99.00th=[38536], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:16:08.255 | 99.99th=[40109] 00:16:08.255 bw ( KiB/s): min= 7888, max= 8496, per=16.62%, avg=8192.00, stdev=429.92, samples=2 00:16:08.255 iops : min= 1972, max= 2124, avg=2048.00, stdev=107.48, samples=2 00:16:08.255 lat (usec) : 750=0.02% 00:16:08.255 lat (msec) : 10=0.79%, 50=92.99%, 100=6.20% 00:16:08.255 cpu : usr=2.69%, sys=7.18%, ctx=194, majf=0, minf=15 00:16:08.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:08.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.255 issued rwts: total=2017,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.255 00:16:08.255 Run status group 0 (all jobs): 00:16:08.255 READ: bw=44.2MiB/s (46.3MB/s), 8036KiB/s-16.0MiB/s (8229kB/s-16.7MB/s), io=44.4MiB (46.5MB), run=1002-1004msec 00:16:08.255 WRITE: bw=48.1MiB/s (50.5MB/s), 8159KiB/s-17.2MiB/s (8355kB/s-18.0MB/s), io=48.3MiB (50.7MB), run=1002-1004msec 00:16:08.255 00:16:08.255 Disk stats (read/write): 00:16:08.255 nvme0n1: ios=1768/2048, merge=0/0, ticks=12440/13361, in_queue=25801, util=88.47% 00:16:08.255 nvme0n2: ios=3815/4096, merge=0/0, ticks=23456/24678, in_queue=48134, util=88.61% 00:16:08.255 nvme0n3: ios=2560/3031, merge=0/0, ticks=15594/17862, 
in_queue=33456, util=89.37% 00:16:08.255 nvme0n4: ios=1664/2048, merge=0/0, ticks=12309/13198, in_queue=25507, util=89.74% 00:16:08.255 07:22:09 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:08.255 [global] 00:16:08.255 thread=1 00:16:08.255 invalidate=1 00:16:08.255 rw=randwrite 00:16:08.255 time_based=1 00:16:08.255 runtime=1 00:16:08.255 ioengine=libaio 00:16:08.255 direct=1 00:16:08.255 bs=4096 00:16:08.255 iodepth=128 00:16:08.255 norandommap=0 00:16:08.255 numjobs=1 00:16:08.255 00:16:08.255 verify_dump=1 00:16:08.255 verify_backlog=512 00:16:08.255 verify_state_save=0 00:16:08.255 do_verify=1 00:16:08.255 verify=crc32c-intel 00:16:08.255 [job0] 00:16:08.255 filename=/dev/nvme0n1 00:16:08.255 [job1] 00:16:08.255 filename=/dev/nvme0n2 00:16:08.255 [job2] 00:16:08.255 filename=/dev/nvme0n3 00:16:08.255 [job3] 00:16:08.255 filename=/dev/nvme0n4 00:16:08.255 Could not set queue depth (nvme0n1) 00:16:08.255 Could not set queue depth (nvme0n2) 00:16:08.255 Could not set queue depth (nvme0n3) 00:16:08.255 Could not set queue depth (nvme0n4) 00:16:08.255 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.255 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.255 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.255 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.255 fio-3.35 00:16:08.255 Starting 4 threads 00:16:09.632 00:16:09.632 job0: (groupid=0, jobs=1): err= 0: pid=87172: Mon Nov 4 07:22:11 2024 00:16:09.632 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:16:09.632 slat (usec): min=6, max=19929, avg=193.30, stdev=1212.66 00:16:09.632 clat (usec): min=11667, max=53455, avg=23105.58, stdev=7630.40 00:16:09.632 lat (usec): min=11693, max=54645, avg=23298.88, stdev=7754.51 00:16:09.632 clat percentiles (usec): 00:16:09.632 | 1.00th=[12387], 5.00th=[14484], 10.00th=[15270], 20.00th=[16450], 00:16:09.632 | 30.00th=[17695], 40.00th=[20055], 50.00th=[22676], 60.00th=[24249], 00:16:09.632 | 70.00th=[25035], 80.00th=[25822], 90.00th=[33817], 95.00th=[41157], 00:16:09.632 | 99.00th=[44827], 99.50th=[49021], 99.90th=[53216], 99.95th=[53216], 00:16:09.632 | 99.99th=[53216] 00:16:09.632 write: IOPS=2401, BW=9606KiB/s (9836kB/s)(9692KiB/1009msec); 0 zone resets 00:16:09.632 slat (usec): min=11, max=13328, avg=240.38, stdev=1138.91 00:16:09.632 clat (usec): min=8484, max=78680, avg=32949.63, stdev=16074.11 00:16:09.632 lat (usec): min=9235, max=78716, avg=33190.01, stdev=16188.28 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[14615], 5.00th=[16581], 10.00th=[17433], 20.00th=[22676], 00:16:09.633 | 30.00th=[25035], 40.00th=[25822], 50.00th=[27132], 60.00th=[28443], 00:16:09.633 | 70.00th=[30802], 80.00th=[46400], 90.00th=[63177], 95.00th=[67634], 00:16:09.633 | 99.00th=[69731], 99.50th=[71828], 99.90th=[79168], 99.95th=[79168], 00:16:09.633 | 99.99th=[79168] 00:16:09.633 bw ( KiB/s): min= 9016, max= 9352, per=21.54%, avg=9184.00, stdev=237.59, samples=2 00:16:09.633 iops : min= 2254, max= 2338, avg=2296.00, stdev=59.40, samples=2 00:16:09.633 lat (msec) : 10=0.20%, 20=27.22%, 50=61.62%, 100=10.96% 00:16:09.633 cpu : usr=2.08%, sys=7.94%, ctx=247, majf=0, minf=12 00:16:09.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.7%, >=64=98.6% 00:16:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.633 issued rwts: total=2048,2423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.633 job1: (groupid=0, jobs=1): err= 0: pid=87173: Mon Nov 4 07:22:11 2024 00:16:09.633 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:16:09.633 slat (usec): min=7, max=5732, avg=91.29, stdev=472.73 00:16:09.633 clat (usec): min=7800, max=18703, avg=11944.67, stdev=1169.20 00:16:09.633 lat (usec): min=7829, max=19715, avg=12035.96, stdev=1189.99 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:16:09.633 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:16:09.633 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13304], 95.00th=[13960], 00:16:09.633 | 99.00th=[15533], 99.50th=[16188], 99.90th=[17171], 99.95th=[18220], 00:16:09.633 | 99.99th=[18744] 00:16:09.633 write: IOPS=5253, BW=20.5MiB/s (21.5MB/s)(20.5MiB/1001msec); 0 zone resets 00:16:09.633 slat (usec): min=12, max=6212, avg=93.37, stdev=451.92 00:16:09.633 clat (usec): min=611, max=20086, avg=12420.79, stdev=1642.91 00:16:09.633 lat (usec): min=631, max=20141, avg=12514.17, stdev=1638.25 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11731], 00:16:09.633 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:16:09.633 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13960], 95.00th=[14746], 00:16:09.633 | 99.00th=[15926], 99.50th=[16581], 99.90th=[17957], 99.95th=[18482], 00:16:09.633 | 99.99th=[20055] 00:16:09.633 bw ( KiB/s): min=20521, max=20576, per=48.20%, avg=20548.50, stdev=38.89, samples=2 00:16:09.633 iops : min= 5130, max= 5144, avg=5137.00, stdev= 9.90, samples=2 00:16:09.633 lat (usec) : 750=0.03%, 1000=0.01% 00:16:09.633 lat (msec) : 4=0.34%, 10=5.37%, 20=94.25%, 50=0.01% 00:16:09.633 cpu : usr=4.80%, sys=14.00%, ctx=600, majf=0, minf=5 00:16:09.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.633 issued rwts: total=5120,5259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.633 job2: (groupid=0, jobs=1): err= 0: pid=87174: Mon Nov 4 07:22:11 2024 00:16:09.633 read: IOPS=1140, BW=4560KiB/s (4670kB/s)(4592KiB/1007msec) 00:16:09.633 slat (usec): min=5, max=18537, avg=361.84, stdev=1875.92 00:16:09.633 clat (usec): min=3277, max=67421, avg=44072.21, stdev=12330.31 00:16:09.633 lat (usec): min=8384, max=67437, avg=44434.05, stdev=12273.59 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[ 8586], 5.00th=[17957], 10.00th=[35390], 20.00th=[35914], 00:16:09.633 | 30.00th=[36963], 40.00th=[38536], 50.00th=[43779], 60.00th=[48497], 00:16:09.633 | 70.00th=[50070], 80.00th=[56361], 90.00th=[60556], 95.00th=[62129], 00:16:09.633 | 99.00th=[66323], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:16:09.633 | 99.99th=[67634] 00:16:09.633 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:16:09.633 slat (usec): min=17, max=15049, avg=374.66, stdev=1643.60 00:16:09.633 clat (usec): min=25496, 
max=81641, avg=48704.23, stdev=13007.83 00:16:09.633 lat (usec): min=32867, max=81668, avg=49078.89, stdev=12997.62 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[27919], 5.00th=[32900], 10.00th=[33162], 20.00th=[34866], 00:16:09.633 | 30.00th=[35914], 40.00th=[45876], 50.00th=[49546], 60.00th=[52691], 00:16:09.633 | 70.00th=[54789], 80.00th=[57410], 90.00th=[65799], 95.00th=[73925], 00:16:09.633 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:16:09.633 | 99.99th=[81265] 00:16:09.633 bw ( KiB/s): min= 5896, max= 6360, per=14.37%, avg=6128.00, stdev=328.10, samples=2 00:16:09.633 iops : min= 1474, max= 1590, avg=1532.00, stdev=82.02, samples=2 00:16:09.633 lat (msec) : 4=0.04%, 10=1.01%, 20=1.19%, 50=56.93%, 100=40.83% 00:16:09.633 cpu : usr=0.70%, sys=5.37%, ctx=267, majf=0, minf=11 00:16:09.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.633 issued rwts: total=1148,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.633 job3: (groupid=0, jobs=1): err= 0: pid=87175: Mon Nov 4 07:22:11 2024 00:16:09.633 read: IOPS=1138, BW=4555KiB/s (4664kB/s)(4596KiB/1009msec) 00:16:09.633 slat (usec): min=5, max=18219, avg=363.40, stdev=1791.62 00:16:09.633 clat (usec): min=3409, max=65749, avg=44682.04, stdev=12155.13 00:16:09.633 lat (usec): min=9077, max=66770, avg=45045.44, stdev=12112.89 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[ 9241], 5.00th=[18482], 10.00th=[34341], 20.00th=[36439], 00:16:09.633 | 30.00th=[37487], 40.00th=[41157], 50.00th=[44303], 60.00th=[48497], 00:16:09.633 | 70.00th=[50070], 80.00th=[56886], 90.00th=[60556], 95.00th=[62129], 00:16:09.633 | 99.00th=[65274], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:16:09.633 | 99.99th=[65799] 00:16:09.633 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:16:09.633 slat (usec): min=16, max=14791, avg=373.83, stdev=1565.15 00:16:09.633 clat (usec): min=25203, max=81398, avg=48326.05, stdev=12929.25 00:16:09.633 lat (usec): min=32793, max=81523, avg=48699.89, stdev=12924.85 00:16:09.633 clat percentiles (usec): 00:16:09.633 | 1.00th=[29230], 5.00th=[32900], 10.00th=[33424], 20.00th=[34866], 00:16:09.633 | 30.00th=[35914], 40.00th=[43254], 50.00th=[47449], 60.00th=[52167], 00:16:09.633 | 70.00th=[53740], 80.00th=[57410], 90.00th=[65274], 95.00th=[74974], 00:16:09.633 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:16:09.633 | 99.99th=[81265] 00:16:09.633 bw ( KiB/s): min= 5651, max= 6624, per=14.40%, avg=6137.50, stdev=688.01, samples=2 00:16:09.633 iops : min= 1412, max= 1656, avg=1534.00, stdev=172.53, samples=2 00:16:09.633 lat (msec) : 4=0.04%, 10=1.04%, 20=1.19%, 50=57.62%, 100=40.11% 00:16:09.633 cpu : usr=1.79%, sys=4.66%, ctx=289, majf=0, minf=17 00:16:09.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.633 issued rwts: total=1149,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.633 00:16:09.633 Run status group 0 (all jobs): 00:16:09.633 READ: bw=36.6MiB/s (38.4MB/s), 
4555KiB/s-20.0MiB/s (4664kB/s-20.9MB/s), io=37.0MiB (38.8MB), run=1001-1009msec 00:16:09.633 WRITE: bw=41.6MiB/s (43.7MB/s), 6089KiB/s-20.5MiB/s (6235kB/s-21.5MB/s), io=42.0MiB (44.0MB), run=1001-1009msec 00:16:09.633 00:16:09.633 Disk stats (read/write): 00:16:09.633 nvme0n1: ios=1648/2048, merge=0/0, ticks=19369/32394, in_queue=51763, util=88.37% 00:16:09.633 nvme0n2: ios=4327/4608, merge=0/0, ticks=18800/20580, in_queue=39380, util=89.47% 00:16:09.633 nvme0n3: ios=1045/1339, merge=0/0, ticks=11195/14097, in_queue=25292, util=89.59% 00:16:09.633 nvme0n4: ios=1024/1316, merge=0/0, ticks=11610/13977, in_queue=25587, util=89.62% 00:16:09.633 07:22:11 -- target/fio.sh@55 -- # sync 00:16:09.633 07:22:11 -- target/fio.sh@59 -- # fio_pid=87191 00:16:09.633 07:22:11 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:09.633 07:22:11 -- target/fio.sh@61 -- # sleep 3 00:16:09.633 [global] 00:16:09.633 thread=1 00:16:09.633 invalidate=1 00:16:09.633 rw=read 00:16:09.633 time_based=1 00:16:09.633 runtime=10 00:16:09.633 ioengine=libaio 00:16:09.633 direct=1 00:16:09.633 bs=4096 00:16:09.633 iodepth=1 00:16:09.633 norandommap=1 00:16:09.633 numjobs=1 00:16:09.633 00:16:09.633 [job0] 00:16:09.633 filename=/dev/nvme0n1 00:16:09.633 [job1] 00:16:09.633 filename=/dev/nvme0n2 00:16:09.633 [job2] 00:16:09.633 filename=/dev/nvme0n3 00:16:09.633 [job3] 00:16:09.633 filename=/dev/nvme0n4 00:16:09.633 Could not set queue depth (nvme0n1) 00:16:09.633 Could not set queue depth (nvme0n2) 00:16:09.633 Could not set queue depth (nvme0n3) 00:16:09.633 Could not set queue depth (nvme0n4) 00:16:09.633 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.633 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.633 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.633 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.633 fio-3.35 00:16:09.633 Starting 4 threads 00:16:12.919 07:22:14 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:12.919 fio: pid=87234, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.919 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32763904, buflen=4096 00:16:12.919 07:22:14 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:12.919 fio: pid=87233, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.919 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37683200, buflen=4096 00:16:12.919 07:22:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.919 07:22:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:13.178 fio: pid=87231, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.178 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31264768, buflen=4096 00:16:13.178 07:22:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.178 07:22:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:13.438 fio: pid=87232, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.438 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45912064, buflen=4096 00:16:13.438 00:16:13.438 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87231: Mon Nov 4 07:22:15 2024 00:16:13.438 read: IOPS=2272, BW=9087KiB/s (9305kB/s)(29.8MiB/3360msec) 00:16:13.438 slat (usec): min=6, max=13802, avg=25.12, stdev=226.69 00:16:13.438 clat (usec): min=140, max=3903, avg=412.76, stdev=165.63 00:16:13.438 lat (usec): min=150, max=14012, avg=437.89, stdev=281.22 00:16:13.438 clat percentiles (usec): 00:16:13.438 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 265], 00:16:13.438 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 424], 00:16:13.438 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 635], 00:16:13.438 | 99.00th=[ 783], 99.50th=[ 955], 99.90th=[ 1319], 99.95th=[ 1696], 00:16:13.438 | 99.99th=[ 3916] 00:16:13.438 bw ( KiB/s): min= 6184, max= 9784, per=20.33%, avg=8136.00, stdev=1702.09, samples=6 00:16:13.438 iops : min= 1546, max= 2446, avg=2034.00, stdev=425.52, samples=6 00:16:13.438 lat (usec) : 250=18.25%, 500=51.70%, 750=28.58%, 1000=1.06% 00:16:13.438 lat (msec) : 2=0.35%, 4=0.04% 00:16:13.438 cpu : usr=0.95%, sys=4.02%, ctx=7706, majf=0, minf=1 00:16:13.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 issued rwts: total=7634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.438 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87232: Mon Nov 4 07:22:15 2024 00:16:13.438 read: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(43.8MiB/3603msec) 00:16:13.438 slat (usec): min=7, max=16309, avg=19.37, stdev=237.64 00:16:13.438 clat (usec): min=116, max=4032, avg=300.46, stdev=116.01 00:16:13.438 lat (usec): min=127, max=16478, avg=319.82, stdev=262.35 00:16:13.438 clat percentiles (usec): 00:16:13.438 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 182], 00:16:13.438 | 30.00th=[ 229], 40.00th=[ 289], 50.00th=[ 330], 60.00th=[ 351], 00:16:13.438 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 429], 00:16:13.438 | 99.00th=[ 478], 99.50th=[ 519], 99.90th=[ 963], 99.95th=[ 1483], 00:16:13.438 | 99.99th=[ 3556] 00:16:13.438 bw ( KiB/s): min= 9720, max=14424, per=27.62%, avg=11052.00, stdev=1777.01, samples=6 00:16:13.438 iops : min= 2430, max= 3606, avg=2763.00, stdev=444.25, samples=6 00:16:13.438 lat (usec) : 250=35.20%, 500=64.11%, 750=0.48%, 1000=0.12% 00:16:13.438 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01% 00:16:13.438 cpu : usr=0.94%, sys=3.86%, ctx=11226, majf=0, minf=2 00:16:13.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 issued rwts: total=11210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.438 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87233: Mon Nov 4 07:22:15 2024 00:16:13.438 read: IOPS=2921, BW=11.4MiB/s 
(12.0MB/s)(35.9MiB/3149msec) 00:16:13.438 slat (usec): min=6, max=9744, avg=17.25, stdev=126.89 00:16:13.438 clat (usec): min=144, max=2089, avg=323.28, stdev=91.89 00:16:13.438 lat (usec): min=159, max=10108, avg=340.53, stdev=156.26 00:16:13.438 clat percentiles (usec): 00:16:13.438 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 225], 00:16:13.438 | 30.00th=[ 260], 40.00th=[ 310], 50.00th=[ 338], 60.00th=[ 359], 00:16:13.438 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 437], 00:16:13.438 | 99.00th=[ 506], 99.50th=[ 619], 99.90th=[ 971], 99.95th=[ 1090], 00:16:13.438 | 99.99th=[ 2089] 00:16:13.438 bw ( KiB/s): min= 9648, max=13688, per=28.73%, avg=11494.67, stdev=1884.36, samples=6 00:16:13.438 iops : min= 2412, max= 3422, avg=2873.67, stdev=471.09, samples=6 00:16:13.438 lat (usec) : 250=28.08%, 500=70.79%, 750=0.82%, 1000=0.22% 00:16:13.438 lat (msec) : 2=0.08%, 4=0.01% 00:16:13.438 cpu : usr=1.02%, sys=3.72%, ctx=9262, majf=0, minf=1 00:16:13.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 issued rwts: total=9201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.438 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87234: Mon Nov 4 07:22:15 2024 00:16:13.438 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(31.2MiB/2930msec) 00:16:13.438 slat (usec): min=8, max=150, avg=14.29, stdev= 5.16 00:16:13.438 clat (usec): min=150, max=8109, avg=350.31, stdev=121.92 00:16:13.438 lat (usec): min=170, max=8125, avg=364.60, stdev=121.25 00:16:13.438 clat percentiles (usec): 00:16:13.438 | 1.00th=[ 190], 5.00th=[ 215], 10.00th=[ 235], 20.00th=[ 314], 00:16:13.438 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 367], 00:16:13.438 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 433], 00:16:13.438 | 99.00th=[ 494], 99.50th=[ 553], 99.90th=[ 1057], 99.95th=[ 2245], 00:16:13.438 | 99.99th=[ 8094] 00:16:13.438 bw ( KiB/s): min= 9784, max=14032, per=27.97%, avg=11193.60, stdev=1730.70, samples=5 00:16:13.438 iops : min= 2446, max= 3508, avg=2798.40, stdev=432.67, samples=5 00:16:13.438 lat (usec) : 250=13.34%, 500=85.74%, 750=0.62%, 1000=0.16% 00:16:13.438 lat (msec) : 2=0.07%, 4=0.04%, 10=0.01% 00:16:13.438 cpu : usr=0.82%, sys=3.52%, ctx=8004, majf=0, minf=2 00:16:13.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.438 issued rwts: total=8000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.438 00:16:13.438 Run status group 0 (all jobs): 00:16:13.438 READ: bw=39.1MiB/s (41.0MB/s), 9087KiB/s-12.2MiB/s (9305kB/s-12.7MB/s), io=141MiB (148MB), run=2930-3603msec 00:16:13.438 00:16:13.438 Disk stats (read/write): 00:16:13.438 nvme0n1: ios=6522/0, merge=0/0, ticks=2923/0, in_queue=2923, util=95.47% 00:16:13.438 nvme0n2: ios=9901/0, merge=0/0, ticks=3161/0, in_queue=3161, util=95.53% 00:16:13.439 nvme0n3: ios=9082/0, merge=0/0, ticks=2943/0, in_queue=2943, util=96.34% 00:16:13.439 nvme0n4: ios=7854/0, merge=0/0, ticks=2703/0, in_queue=2703, util=96.49% 00:16:13.439 
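(Reference sketch, reconstructed only from the rpc.py and nvme calls already shown earlier in this log; the long /home/vagrant/spdk_repo/spdk prefix is abbreviated to scripts/ and the --hostnqn/--hostid arguments that the actual run passes to nvme connect are omitted here:)

  # transport + backing bdevs (bdev_malloc_create is repeated for Malloc0..Malloc6)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem exposes Malloc0, Malloc1, raid0 and concat0, then the initiator connects
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # workloads are then driven through the fio wrapper, e.g.
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

The bdev_raid_delete and bdev_malloc_delete calls around this point tear the same objects down again before nvme disconnect, nvmf_delete_subsystem and the final module unload.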
07:22:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.439 07:22:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:13.697 07:22:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.697 07:22:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:13.956 07:22:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.956 07:22:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:14.215 07:22:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.215 07:22:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:14.474 07:22:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.474 07:22:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:14.733 07:22:16 -- target/fio.sh@69 -- # fio_status=0 00:16:14.733 07:22:16 -- target/fio.sh@70 -- # wait 87191 00:16:14.733 07:22:16 -- target/fio.sh@70 -- # fio_status=4 00:16:14.733 07:22:16 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.733 07:22:16 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.733 07:22:16 -- common/autotest_common.sh@1198 -- # local i=0 00:16:14.733 07:22:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:14.733 07:22:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.733 07:22:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.733 07:22:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:14.733 nvmf hotplug test: fio failed as expected 00:16:14.733 07:22:16 -- common/autotest_common.sh@1210 -- # return 0 00:16:14.733 07:22:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:14.733 07:22:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:14.733 07:22:16 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.992 07:22:16 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:14.992 07:22:16 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:14.992 07:22:16 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:14.992 07:22:16 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:14.992 07:22:16 -- target/fio.sh@91 -- # nvmftestfini 00:16:14.992 07:22:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:14.992 07:22:16 -- nvmf/common.sh@116 -- # sync 00:16:14.992 07:22:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:14.992 07:22:16 -- nvmf/common.sh@119 -- # set +e 00:16:14.992 07:22:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:14.992 07:22:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:14.992 rmmod nvme_tcp 00:16:14.992 rmmod nvme_fabrics 00:16:14.992 rmmod nvme_keyring 00:16:15.251 07:22:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.251 07:22:16 -- nvmf/common.sh@123 -- # set -e 00:16:15.251 07:22:16 -- nvmf/common.sh@124 -- # return 0 00:16:15.251 07:22:16 -- nvmf/common.sh@477 -- # '[' 
-n 86697 ']' 00:16:15.251 07:22:16 -- nvmf/common.sh@478 -- # killprocess 86697 00:16:15.251 07:22:16 -- common/autotest_common.sh@926 -- # '[' -z 86697 ']' 00:16:15.251 07:22:16 -- common/autotest_common.sh@930 -- # kill -0 86697 00:16:15.251 07:22:16 -- common/autotest_common.sh@931 -- # uname 00:16:15.251 07:22:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.251 07:22:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86697 00:16:15.251 killing process with pid 86697 00:16:15.251 07:22:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.251 07:22:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.251 07:22:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86697' 00:16:15.251 07:22:16 -- common/autotest_common.sh@945 -- # kill 86697 00:16:15.251 07:22:16 -- common/autotest_common.sh@950 -- # wait 86697 00:16:15.251 07:22:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.251 07:22:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.251 07:22:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.251 07:22:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.251 07:22:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.251 07:22:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.251 07:22:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.251 07:22:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.510 07:22:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.510 00:16:15.510 real 0m19.408s 00:16:15.510 user 1m15.652s 00:16:15.510 sys 0m7.280s 00:16:15.510 07:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.510 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:16:15.510 ************************************ 00:16:15.510 END TEST nvmf_fio_target 00:16:15.510 ************************************ 00:16:15.510 07:22:17 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:15.510 07:22:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:15.510 07:22:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.510 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:16:15.510 ************************************ 00:16:15.510 START TEST nvmf_bdevio 00:16:15.510 ************************************ 00:16:15.510 07:22:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:15.510 * Looking for test storage... 
00:16:15.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.510 07:22:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.510 07:22:17 -- nvmf/common.sh@7 -- # uname -s 00:16:15.510 07:22:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.510 07:22:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.510 07:22:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.510 07:22:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.510 07:22:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.510 07:22:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.510 07:22:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.510 07:22:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.510 07:22:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.510 07:22:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:15.510 07:22:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:15.510 07:22:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.510 07:22:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.510 07:22:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.510 07:22:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.510 07:22:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.510 07:22:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.510 07:22:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.510 07:22:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.510 07:22:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.510 07:22:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.510 07:22:17 -- 
paths/export.sh@5 -- # export PATH 00:16:15.510 07:22:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.510 07:22:17 -- nvmf/common.sh@46 -- # : 0 00:16:15.510 07:22:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.510 07:22:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.510 07:22:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.510 07:22:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.510 07:22:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.510 07:22:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.510 07:22:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.510 07:22:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.510 07:22:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.510 07:22:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.510 07:22:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:15.510 07:22:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.510 07:22:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.510 07:22:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.510 07:22:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.510 07:22:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.510 07:22:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.510 07:22:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.510 07:22:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.510 07:22:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:15.510 07:22:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:15.510 07:22:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.510 07:22:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.510 07:22:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.510 07:22:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:15.510 07:22:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.510 07:22:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.510 07:22:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.510 07:22:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.510 07:22:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.510 07:22:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.510 07:22:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.510 07:22:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.510 07:22:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:15.510 
07:22:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:15.510 Cannot find device "nvmf_tgt_br" 00:16:15.510 07:22:17 -- nvmf/common.sh@154 -- # true 00:16:15.510 07:22:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.511 Cannot find device "nvmf_tgt_br2" 00:16:15.511 07:22:17 -- nvmf/common.sh@155 -- # true 00:16:15.511 07:22:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:15.511 07:22:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:15.511 Cannot find device "nvmf_tgt_br" 00:16:15.511 07:22:17 -- nvmf/common.sh@157 -- # true 00:16:15.511 07:22:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:15.511 Cannot find device "nvmf_tgt_br2" 00:16:15.511 07:22:17 -- nvmf/common.sh@158 -- # true 00:16:15.511 07:22:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:15.770 07:22:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:15.770 07:22:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.770 07:22:17 -- nvmf/common.sh@161 -- # true 00:16:15.770 07:22:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.770 07:22:17 -- nvmf/common.sh@162 -- # true 00:16:15.770 07:22:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.770 07:22:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.770 07:22:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.770 07:22:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.770 07:22:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.770 07:22:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.770 07:22:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.770 07:22:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.770 07:22:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.770 07:22:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:15.770 07:22:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:15.770 07:22:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:15.770 07:22:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:15.770 07:22:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.770 07:22:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.770 07:22:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:15.770 07:22:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:15.770 07:22:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:15.770 07:22:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.770 07:22:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.770 07:22:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.770 07:22:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.770 07:22:17 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.770 07:22:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:15.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:16:15.770 00:16:15.770 --- 10.0.0.2 ping statistics --- 00:16:15.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.770 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:15.770 07:22:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:15.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:15.770 00:16:15.770 --- 10.0.0.3 ping statistics --- 00:16:15.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.770 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:15.770 07:22:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:15.770 00:16:15.770 --- 10.0.0.1 ping statistics --- 00:16:15.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.770 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:15.770 07:22:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.770 07:22:17 -- nvmf/common.sh@421 -- # return 0 00:16:15.770 07:22:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.770 07:22:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.770 07:22:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.770 07:22:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.770 07:22:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.770 07:22:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.770 07:22:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.029 07:22:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:16.029 07:22:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.029 07:22:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.029 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.029 07:22:17 -- nvmf/common.sh@469 -- # nvmfpid=87554 00:16:16.029 07:22:17 -- nvmf/common.sh@470 -- # waitforlisten 87554 00:16:16.029 07:22:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:16.029 07:22:17 -- common/autotest_common.sh@819 -- # '[' -z 87554 ']' 00:16:16.029 07:22:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.029 07:22:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.029 07:22:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.030 07:22:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.030 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.030 [2024-11-04 07:22:17.673851] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
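Before the target starts, nvmf_veth_init (nvmf/common.sh) builds a small virtual topology: one veth pair for the initiator in the root namespace and one for the target inside nvmf_tgt_ns_spdk, joined by a bridge, with TCP port 4420 opened in iptables. A condensed sketch of the commands traced above (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here):

    # Condensed from the nvmf_veth_init trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the root namespace can now reach the target address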
00:16:16.030 [2024-11-04 07:22:17.673957] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.030 [2024-11-04 07:22:17.816455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.288 [2024-11-04 07:22:17.883654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.288 [2024-11-04 07:22:17.883793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.288 [2024-11-04 07:22:17.883806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.289 [2024-11-04 07:22:17.883814] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.289 [2024-11-04 07:22:17.884010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.289 [2024-11-04 07:22:17.884443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:16.289 [2024-11-04 07:22:17.884572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:16.289 [2024-11-04 07:22:17.884579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.225 07:22:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.225 07:22:18 -- common/autotest_common.sh@852 -- # return 0 00:16:17.225 07:22:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.225 07:22:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 07:22:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.225 07:22:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.225 07:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 [2024-11-04 07:22:18.762857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.225 07:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.225 07:22:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.225 07:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 Malloc0 00:16:17.225 07:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.225 07:22:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.225 07:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 07:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.225 07:22:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.225 07:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 07:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.225 07:22:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.225 07:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.225 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:16:17.225 
[2024-11-04 07:22:18.830372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.225 07:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.225 07:22:18 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:17.225 07:22:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.225 07:22:18 -- nvmf/common.sh@520 -- # config=() 00:16:17.225 07:22:18 -- nvmf/common.sh@520 -- # local subsystem config 00:16:17.226 07:22:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:17.226 07:22:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:17.226 { 00:16:17.226 "params": { 00:16:17.226 "name": "Nvme$subsystem", 00:16:17.226 "trtype": "$TEST_TRANSPORT", 00:16:17.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.226 "adrfam": "ipv4", 00:16:17.226 "trsvcid": "$NVMF_PORT", 00:16:17.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.226 "hdgst": ${hdgst:-false}, 00:16:17.226 "ddgst": ${ddgst:-false} 00:16:17.226 }, 00:16:17.226 "method": "bdev_nvme_attach_controller" 00:16:17.226 } 00:16:17.226 EOF 00:16:17.226 )") 00:16:17.226 07:22:18 -- nvmf/common.sh@542 -- # cat 00:16:17.226 07:22:18 -- nvmf/common.sh@544 -- # jq . 00:16:17.226 07:22:18 -- nvmf/common.sh@545 -- # IFS=, 00:16:17.226 07:22:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:17.226 "params": { 00:16:17.226 "name": "Nvme1", 00:16:17.226 "trtype": "tcp", 00:16:17.226 "traddr": "10.0.0.2", 00:16:17.226 "adrfam": "ipv4", 00:16:17.226 "trsvcid": "4420", 00:16:17.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.226 "hdgst": false, 00:16:17.226 "ddgst": false 00:16:17.226 }, 00:16:17.226 "method": "bdev_nvme_attach_controller" 00:16:17.226 }' 00:16:17.226 [2024-11-04 07:22:18.886694] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:17.226 [2024-11-04 07:22:18.886948] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87609 ] 00:16:17.226 [2024-11-04 07:22:19.033049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.485 [2024-11-04 07:22:19.103265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.485 [2024-11-04 07:22:19.103405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.485 [2024-11-04 07:22:19.103951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.485 [2024-11-04 07:22:19.281864] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
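bdevio.sh then configures the target entirely through RPCs (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock). The equivalent stand-alone sequence, with the same sizes and names as in the trace above:

    # Same bring-up as the bdevio.sh@18-22 steps above, written as plain rpc.py calls.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420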
00:16:17.485 [2024-11-04 07:22:19.282490] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:17.485 I/O targets: 00:16:17.485 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:17.485 00:16:17.485 00:16:17.485 CUnit - A unit testing framework for C - Version 2.1-3 00:16:17.485 http://cunit.sourceforge.net/ 00:16:17.485 00:16:17.485 00:16:17.485 Suite: bdevio tests on: Nvme1n1 00:16:17.744 Test: blockdev write read block ...passed 00:16:17.744 Test: blockdev write zeroes read block ...passed 00:16:17.744 Test: blockdev write zeroes read no split ...passed 00:16:17.744 Test: blockdev write zeroes read split ...passed 00:16:17.744 Test: blockdev write zeroes read split partial ...passed 00:16:17.744 Test: blockdev reset ...[2024-11-04 07:22:19.397481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.744 [2024-11-04 07:22:19.397765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6ed0 (9): Bad file descriptor 00:16:17.744 [2024-11-04 07:22:19.410855] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:17.744 passed 00:16:17.744 Test: blockdev write read 8 blocks ...passed 00:16:17.745 Test: blockdev write read size > 128k ...passed 00:16:17.745 Test: blockdev write read invalid size ...passed 00:16:17.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:17.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:17.745 Test: blockdev write read max offset ...passed 00:16:17.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:17.745 Test: blockdev writev readv 8 blocks ...passed 00:16:17.745 Test: blockdev writev readv 30 x 1block ...passed 00:16:17.745 Test: blockdev writev readv block ...passed 00:16:17.745 Test: blockdev writev readv size > 128k ...passed 00:16:17.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.004 Test: blockdev comparev and writev ...[2024-11-04 07:22:19.585308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.585348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.585368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.585380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.585722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.585760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.585778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.585788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.586192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.586216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.586235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.586255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.586821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.586858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.586904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.004 [2024-11-04 07:22:19.586924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.004 passed 00:16:18.004 Test: blockdev nvme passthru rw ...passed 00:16:18.004 Test: blockdev nvme passthru vendor specific ...[2024-11-04 07:22:19.670158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.004 [2024-11-04 07:22:19.670189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.670303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.004 [2024-11-04 07:22:19.670321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.670478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.004 [2024-11-04 07:22:19.670501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.004 [2024-11-04 07:22:19.670617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.004 passed 00:16:18.004 Test: blockdev nvme admin passthru ...[2024-11-04 07:22:19.670640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.004 passed 00:16:18.004 Test: blockdev copy ...passed 00:16:18.004 00:16:18.004 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.004 suites 1 1 n/a 0 0 00:16:18.004 tests 23 23 23 0 0 00:16:18.004 asserts 152 152 152 0 n/a 00:16:18.004 00:16:18.004 Elapsed time = 0.892 seconds 00:16:18.263 07:22:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.263 07:22:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.263 07:22:19 -- common/autotest_common.sh@10 -- # set +x 00:16:18.263 07:22:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.263 07:22:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:18.263 07:22:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:18.263 07:22:19 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:18.263 07:22:19 -- nvmf/common.sh@116 -- # sync 00:16:18.263 07:22:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.263 07:22:19 -- nvmf/common.sh@119 -- # set +e 00:16:18.263 07:22:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.263 07:22:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.263 rmmod nvme_tcp 00:16:18.263 rmmod nvme_fabrics 00:16:18.263 rmmod nvme_keyring 00:16:18.263 07:22:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.263 07:22:20 -- nvmf/common.sh@123 -- # set -e 00:16:18.263 07:22:20 -- nvmf/common.sh@124 -- # return 0 00:16:18.263 07:22:20 -- nvmf/common.sh@477 -- # '[' -n 87554 ']' 00:16:18.263 07:22:20 -- nvmf/common.sh@478 -- # killprocess 87554 00:16:18.263 07:22:20 -- common/autotest_common.sh@926 -- # '[' -z 87554 ']' 00:16:18.263 07:22:20 -- common/autotest_common.sh@930 -- # kill -0 87554 00:16:18.263 07:22:20 -- common/autotest_common.sh@931 -- # uname 00:16:18.263 07:22:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:18.263 07:22:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87554 00:16:18.263 07:22:20 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:18.263 07:22:20 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:18.263 killing process with pid 87554 00:16:18.263 07:22:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87554' 00:16:18.263 07:22:20 -- common/autotest_common.sh@945 -- # kill 87554 00:16:18.263 07:22:20 -- common/autotest_common.sh@950 -- # wait 87554 00:16:18.522 07:22:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.522 07:22:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.522 07:22:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.522 07:22:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.522 07:22:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.522 07:22:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.522 07:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.522 07:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.522 07:22:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:18.522 00:16:18.522 real 0m3.187s 00:16:18.522 user 0m11.689s 00:16:18.522 sys 0m0.829s 00:16:18.522 07:22:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.522 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.522 ************************************ 00:16:18.522 END TEST nvmf_bdevio 00:16:18.522 ************************************ 00:16:18.781 07:22:20 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:18.781 07:22:20 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:18.781 07:22:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:18.781 07:22:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:18.781 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 ************************************ 00:16:18.781 START TEST nvmf_bdevio_no_huge 00:16:18.781 ************************************ 00:16:18.781 07:22:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:18.781 * Looking for test storage... 
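The nvmftestfini sequence that closes out nvmf_bdevio above follows the same pattern every suite in this log uses. A simplified sketch (remove_spdk_ns is assumed to delete the nvmf_tgt_ns_spdk namespace; the traced helper hides its body behind xtrace_disable):

    # Simplified teardown, mirroring the trace above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp              # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: only after ps confirms the pid is an SPDK reactor
    _remove_spdk_ns                      # assumed: tears down the nvmf_tgt_ns_spdk namespace
    ip -4 addr flush nvmf_init_if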
00:16:18.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.781 07:22:20 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.781 07:22:20 -- nvmf/common.sh@7 -- # uname -s 00:16:18.781 07:22:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.781 07:22:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.781 07:22:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.781 07:22:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.781 07:22:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.781 07:22:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.781 07:22:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.781 07:22:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.781 07:22:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.781 07:22:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.781 07:22:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:18.781 07:22:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:18.781 07:22:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.781 07:22:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.781 07:22:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.781 07:22:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.781 07:22:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.781 07:22:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.781 07:22:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.781 07:22:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.781 07:22:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.781 07:22:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.781 07:22:20 -- 
paths/export.sh@5 -- # export PATH 00:16:18.782 07:22:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.782 07:22:20 -- nvmf/common.sh@46 -- # : 0 00:16:18.782 07:22:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:18.782 07:22:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:18.782 07:22:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:18.782 07:22:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.782 07:22:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.782 07:22:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:18.782 07:22:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:18.782 07:22:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:18.782 07:22:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:18.782 07:22:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:18.782 07:22:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:18.782 07:22:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:18.782 07:22:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.782 07:22:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:18.782 07:22:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:18.782 07:22:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:18.782 07:22:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.782 07:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.782 07:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.782 07:22:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:18.782 07:22:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:18.782 07:22:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:18.782 07:22:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:18.782 07:22:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:18.782 07:22:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:18.782 07:22:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.782 07:22:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.782 07:22:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:18.782 07:22:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:18.782 07:22:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.782 07:22:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.782 07:22:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.782 07:22:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.782 07:22:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.782 07:22:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.782 07:22:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.782 07:22:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.782 07:22:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:18.782 
07:22:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:18.782 Cannot find device "nvmf_tgt_br" 00:16:18.782 07:22:20 -- nvmf/common.sh@154 -- # true 00:16:18.782 07:22:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.782 Cannot find device "nvmf_tgt_br2" 00:16:18.782 07:22:20 -- nvmf/common.sh@155 -- # true 00:16:18.782 07:22:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:18.782 07:22:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:18.782 Cannot find device "nvmf_tgt_br" 00:16:18.782 07:22:20 -- nvmf/common.sh@157 -- # true 00:16:18.782 07:22:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:18.782 Cannot find device "nvmf_tgt_br2" 00:16:18.782 07:22:20 -- nvmf/common.sh@158 -- # true 00:16:18.782 07:22:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:19.041 07:22:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:19.041 07:22:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.041 07:22:20 -- nvmf/common.sh@161 -- # true 00:16:19.041 07:22:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.041 07:22:20 -- nvmf/common.sh@162 -- # true 00:16:19.041 07:22:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.041 07:22:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.041 07:22:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.041 07:22:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.041 07:22:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.041 07:22:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.041 07:22:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.041 07:22:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.041 07:22:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.041 07:22:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:19.041 07:22:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:19.041 07:22:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:19.041 07:22:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:19.041 07:22:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.041 07:22:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.041 07:22:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.041 07:22:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:19.041 07:22:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:19.041 07:22:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.041 07:22:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.041 07:22:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.041 07:22:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.300 07:22:20 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.300 07:22:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:19.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:19.300 00:16:19.300 --- 10.0.0.2 ping statistics --- 00:16:19.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.300 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:19.300 07:22:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:19.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:19.300 00:16:19.300 --- 10.0.0.3 ping statistics --- 00:16:19.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.300 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:19.300 07:22:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:19.300 00:16:19.300 --- 10.0.0.1 ping statistics --- 00:16:19.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.300 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:19.300 07:22:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.300 07:22:20 -- nvmf/common.sh@421 -- # return 0 00:16:19.300 07:22:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:19.300 07:22:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.300 07:22:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:19.300 07:22:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:19.300 07:22:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.300 07:22:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:19.300 07:22:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:19.300 07:22:20 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:19.300 07:22:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:19.300 07:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:19.300 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:19.300 07:22:20 -- nvmf/common.sh@469 -- # nvmfpid=87790 00:16:19.300 07:22:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:19.300 07:22:20 -- nvmf/common.sh@470 -- # waitforlisten 87790 00:16:19.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.300 07:22:20 -- common/autotest_common.sh@819 -- # '[' -z 87790 ']' 00:16:19.300 07:22:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.300 07:22:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:19.300 07:22:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.300 07:22:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:19.300 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:19.300 [2024-11-04 07:22:20.966586] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
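The nvmf_bdevio_no_huge variant repeats the same network setup; the difference is purely in how the SPDK apps are launched. Both command lines below are taken from the trace (-s caps the app's memory pool in MiB, and --no-huge makes the EAL fall back to anonymous memory with --iova-mode=va):

    # nvmf_bdevio: hugepage-backed target
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    # nvmf_bdevio_no_huge: no hugepages, 1024 MiB of malloc-backed memory
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78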
00:16:19.300 [2024-11-04 07:22:20.966657] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:19.300 [2024-11-04 07:22:21.102100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.559 [2024-11-04 07:22:21.188200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:19.559 [2024-11-04 07:22:21.188329] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.559 [2024-11-04 07:22:21.188340] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.559 [2024-11-04 07:22:21.188349] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.559 [2024-11-04 07:22:21.188931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:19.559 [2024-11-04 07:22:21.189021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:19.559 [2024-11-04 07:22:21.189105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:19.559 [2024-11-04 07:22:21.189109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.128 07:22:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:20.128 07:22:21 -- common/autotest_common.sh@852 -- # return 0 00:16:20.128 07:22:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:20.128 07:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:20.128 07:22:21 -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 07:22:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.387 07:22:21 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.387 07:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.387 07:22:21 -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 [2024-11-04 07:22:21.999780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.387 07:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.387 07:22:22 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.387 07:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.387 07:22:22 -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 Malloc0 00:16:20.387 07:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.387 07:22:22 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.387 07:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.387 07:22:22 -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 07:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.387 07:22:22 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.387 07:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.387 07:22:22 -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 07:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.387 07:22:22 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.387 07:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.387 07:22:22 -- common/autotest_common.sh@10 -- # set +x 00:16:20.388 
[2024-11-04 07:22:22.040005] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.388 07:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.388 07:22:22 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:20.388 07:22:22 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:20.388 07:22:22 -- nvmf/common.sh@520 -- # config=() 00:16:20.388 07:22:22 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.388 07:22:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.388 07:22:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.388 { 00:16:20.388 "params": { 00:16:20.388 "name": "Nvme$subsystem", 00:16:20.388 "trtype": "$TEST_TRANSPORT", 00:16:20.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.388 "adrfam": "ipv4", 00:16:20.388 "trsvcid": "$NVMF_PORT", 00:16:20.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.388 "hdgst": ${hdgst:-false}, 00:16:20.388 "ddgst": ${ddgst:-false} 00:16:20.388 }, 00:16:20.388 "method": "bdev_nvme_attach_controller" 00:16:20.388 } 00:16:20.388 EOF 00:16:20.388 )") 00:16:20.388 07:22:22 -- nvmf/common.sh@542 -- # cat 00:16:20.388 07:22:22 -- nvmf/common.sh@544 -- # jq . 00:16:20.388 07:22:22 -- nvmf/common.sh@545 -- # IFS=, 00:16:20.388 07:22:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:20.388 "params": { 00:16:20.388 "name": "Nvme1", 00:16:20.388 "trtype": "tcp", 00:16:20.388 "traddr": "10.0.0.2", 00:16:20.388 "adrfam": "ipv4", 00:16:20.388 "trsvcid": "4420", 00:16:20.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.388 "hdgst": false, 00:16:20.388 "ddgst": false 00:16:20.388 }, 00:16:20.388 "method": "bdev_nvme_attach_controller" 00:16:20.388 }' 00:16:20.388 [2024-11-04 07:22:22.102706] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:20.388 [2024-11-04 07:22:22.102810] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87845 ] 00:16:20.647 [2024-11-04 07:22:22.248819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.647 [2024-11-04 07:22:22.353781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.647 [2024-11-04 07:22:22.353956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.647 [2024-11-04 07:22:22.353959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.905 [2024-11-04 07:22:22.521683] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
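The bdevio binary never issues a connect itself; it reads a JSON config describing the controller to attach, delivered over /dev/fd/62, which is simply bash process substitution around the gen_nvmf_target_json helper traced above. A sketch of the invocation pattern (the generated JSON is abbreviated here to the parameters printed in the trace):

    # Invocation pattern only; gen_nvmf_target_json is the helper traced above.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --no-huge -s 1024 \
        --json <(gen_nvmf_target_json)       # bash exposes this as /dev/fd/62
    # The config it reads attaches a single controller:
    #   bdev_nvme_attach_controller: name=Nvme1, trtype=tcp, traddr=10.0.0.2,
    #   trsvcid=4420, subnqn=nqn.2016-06.io.spdk:cnode1, hostnqn=nqn.2016-06.io.spdk:host1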
00:16:20.905 [2024-11-04 07:22:22.521745] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:20.905 I/O targets: 00:16:20.905 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:20.905 00:16:20.905 00:16:20.905 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.905 http://cunit.sourceforge.net/ 00:16:20.905 00:16:20.905 00:16:20.905 Suite: bdevio tests on: Nvme1n1 00:16:20.905 Test: blockdev write read block ...passed 00:16:20.905 Test: blockdev write zeroes read block ...passed 00:16:20.905 Test: blockdev write zeroes read no split ...passed 00:16:20.905 Test: blockdev write zeroes read split ...passed 00:16:20.905 Test: blockdev write zeroes read split partial ...passed 00:16:20.905 Test: blockdev reset ...[2024-11-04 07:22:22.653951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:20.905 [2024-11-04 07:22:22.654030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d42820 (9): Bad file descriptor 00:16:20.905 [2024-11-04 07:22:22.670981] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:20.905 passed 00:16:20.905 Test: blockdev write read 8 blocks ...passed 00:16:20.905 Test: blockdev write read size > 128k ...passed 00:16:20.905 Test: blockdev write read invalid size ...passed 00:16:20.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:20.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:20.905 Test: blockdev write read max offset ...passed 00:16:21.164 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.164 Test: blockdev writev readv 8 blocks ...passed 00:16:21.164 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.164 Test: blockdev writev readv block ...passed 00:16:21.164 Test: blockdev writev readv size > 128k ...passed 00:16:21.164 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.164 Test: blockdev comparev and writev ...[2024-11-04 07:22:22.846653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.164 [2024-11-04 07:22:22.846730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.846766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.847407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.847454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.847473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.847484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.847973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.848003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.848021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.848033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.848590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.848635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.848653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.165 [2024-11-04 07:22:22.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.165 passed 00:16:21.165 Test: blockdev nvme passthru rw ...passed 00:16:21.165 Test: blockdev nvme passthru vendor specific ...[2024-11-04 07:22:22.933211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.165 [2024-11-04 07:22:22.933239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.933492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.165 [2024-11-04 07:22:22.933519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.933709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.165 [2024-11-04 07:22:22.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.165 [2024-11-04 07:22:22.933905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.165 [2024-11-04 07:22:22.933935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.165 passed 00:16:21.165 Test: blockdev nvme admin passthru ...passed 00:16:21.165 Test: blockdev copy ...passed 00:16:21.165 00:16:21.165 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.165 suites 1 1 n/a 0 0 00:16:21.165 tests 23 23 23 0 0 00:16:21.165 asserts 152 152 152 0 n/a 00:16:21.165 00:16:21.165 Elapsed time = 0.931 seconds 00:16:21.733 07:22:23 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.733 07:22:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.733 07:22:23 -- common/autotest_common.sh@10 -- # set +x 00:16:21.733 07:22:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.733 07:22:23 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:21.733 07:22:23 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:21.733 07:22:23 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:21.733 07:22:23 -- nvmf/common.sh@116 -- # sync 00:16:21.733 07:22:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:21.733 07:22:23 -- nvmf/common.sh@119 -- # set +e 00:16:21.733 07:22:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:21.733 07:22:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:21.733 rmmod nvme_tcp 00:16:21.733 rmmod nvme_fabrics 00:16:21.733 rmmod nvme_keyring 00:16:21.733 07:22:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:21.733 07:22:23 -- nvmf/common.sh@123 -- # set -e 00:16:21.733 07:22:23 -- nvmf/common.sh@124 -- # return 0 00:16:21.733 07:22:23 -- nvmf/common.sh@477 -- # '[' -n 87790 ']' 00:16:21.733 07:22:23 -- nvmf/common.sh@478 -- # killprocess 87790 00:16:21.733 07:22:23 -- common/autotest_common.sh@926 -- # '[' -z 87790 ']' 00:16:21.733 07:22:23 -- common/autotest_common.sh@930 -- # kill -0 87790 00:16:21.733 07:22:23 -- common/autotest_common.sh@931 -- # uname 00:16:21.733 07:22:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:21.733 07:22:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87790 00:16:21.733 07:22:23 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:21.733 killing process with pid 87790 00:16:21.733 07:22:23 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:21.733 07:22:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87790' 00:16:21.733 07:22:23 -- common/autotest_common.sh@945 -- # kill 87790 00:16:21.733 07:22:23 -- common/autotest_common.sh@950 -- # wait 87790 00:16:21.992 07:22:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:21.992 07:22:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:21.992 07:22:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:21.992 07:22:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.992 07:22:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:21.992 07:22:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.992 07:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.992 07:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.992 07:22:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:21.992 00:16:21.992 real 0m3.410s 00:16:21.992 user 0m12.184s 00:16:21.992 sys 0m1.229s 00:16:21.992 07:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.992 07:22:23 -- common/autotest_common.sh@10 -- # set +x 00:16:21.992 ************************************ 00:16:21.992 END TEST nvmf_bdevio_no_huge 00:16:21.992 ************************************ 00:16:22.251 07:22:23 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:22.251 07:22:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:22.251 07:22:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:22.251 07:22:23 -- common/autotest_common.sh@10 -- # set +x 00:16:22.251 ************************************ 00:16:22.251 START TEST nvmf_tls 00:16:22.251 ************************************ 00:16:22.251 07:22:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:22.251 * Looking for test storage... 
00:16:22.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.251 07:22:23 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.251 07:22:23 -- nvmf/common.sh@7 -- # uname -s 00:16:22.251 07:22:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.251 07:22:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.251 07:22:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.251 07:22:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.251 07:22:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.251 07:22:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.251 07:22:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.251 07:22:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.251 07:22:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.251 07:22:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:22.251 07:22:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:16:22.251 07:22:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.251 07:22:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.251 07:22:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.251 07:22:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.251 07:22:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.251 07:22:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.251 07:22:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.251 07:22:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.251 07:22:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.251 07:22:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.251 07:22:23 -- paths/export.sh@5 
-- # export PATH 00:16:22.251 07:22:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.251 07:22:23 -- nvmf/common.sh@46 -- # : 0 00:16:22.251 07:22:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:22.251 07:22:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:22.251 07:22:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:22.251 07:22:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.251 07:22:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.251 07:22:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:22.251 07:22:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:22.251 07:22:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:22.251 07:22:23 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.251 07:22:23 -- target/tls.sh@71 -- # nvmftestinit 00:16:22.251 07:22:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:22.251 07:22:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.251 07:22:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:22.251 07:22:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:22.251 07:22:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:22.251 07:22:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.251 07:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.251 07:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.251 07:22:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:22.251 07:22:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:22.251 07:22:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.251 07:22:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.251 07:22:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.251 07:22:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:22.251 07:22:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.251 07:22:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.251 07:22:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.251 07:22:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.251 07:22:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.251 07:22:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.251 07:22:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.251 07:22:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.251 07:22:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:22.251 07:22:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:22.251 Cannot find device "nvmf_tgt_br" 00:16:22.251 07:22:24 -- nvmf/common.sh@154 -- # true 00:16:22.251 07:22:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.251 Cannot find device "nvmf_tgt_br2" 00:16:22.251 07:22:24 -- nvmf/common.sh@155 -- # true 00:16:22.251 07:22:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:22.251 07:22:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:22.251 Cannot find device "nvmf_tgt_br" 00:16:22.251 07:22:24 -- nvmf/common.sh@157 -- # true 00:16:22.251 07:22:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:22.251 Cannot find device "nvmf_tgt_br2" 00:16:22.251 07:22:24 -- nvmf/common.sh@158 -- # true 00:16:22.251 07:22:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:22.251 07:22:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:22.251 07:22:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.510 07:22:24 -- nvmf/common.sh@161 -- # true 00:16:22.510 07:22:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.510 07:22:24 -- nvmf/common.sh@162 -- # true 00:16:22.510 07:22:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.510 07:22:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.510 07:22:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.510 07:22:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.510 07:22:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.510 07:22:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.510 07:22:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.510 07:22:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.510 07:22:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.510 07:22:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:22.510 07:22:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:22.510 07:22:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:22.510 07:22:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:22.510 07:22:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.510 07:22:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.510 07:22:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.510 07:22:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:22.510 07:22:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:22.510 07:22:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.510 07:22:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.510 07:22:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.510 07:22:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.510 07:22:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:22.510 07:22:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:22.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:16:22.510 00:16:22.510 --- 10.0.0.2 ping statistics --- 00:16:22.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.510 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:22.510 07:22:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:22.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:22.510 00:16:22.510 --- 10.0.0.3 ping statistics --- 00:16:22.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.510 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:22.510 07:22:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:22.510 00:16:22.510 --- 10.0.0.1 ping statistics --- 00:16:22.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.510 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:22.510 07:22:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.510 07:22:24 -- nvmf/common.sh@421 -- # return 0 00:16:22.510 07:22:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.510 07:22:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.510 07:22:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:22.510 07:22:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:22.510 07:22:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.511 07:22:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:22.511 07:22:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:22.511 07:22:24 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:22.511 07:22:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.511 07:22:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:22.511 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:16:22.511 07:22:24 -- nvmf/common.sh@469 -- # nvmfpid=88033 00:16:22.511 07:22:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:22.511 07:22:24 -- nvmf/common.sh@470 -- # waitforlisten 88033 00:16:22.511 07:22:24 -- common/autotest_common.sh@819 -- # '[' -z 88033 ']' 00:16:22.511 07:22:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.511 07:22:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:22.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.511 07:22:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.511 07:22:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:22.511 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 [2024-11-04 07:22:24.394355] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
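The nvmf_veth_init/nvmfappstart steps traced above reduce to the following fixture. This is a condensed sketch using the interface names and addresses printed in the trace, with the individual "ip link set ... up" calls and the cleanup path omitted:

  # target-side veth ends live in a private namespace, the initiator side stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
  # bridge the root-namespace ends together and let NVMe/TCP (port 4420) traffic in
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability checks, then the target is launched inside the namespace and left waiting for RPC
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &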
00:16:22.769 [2024-11-04 07:22:24.394452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.769 [2024-11-04 07:22:24.537525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.028 [2024-11-04 07:22:24.630962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.028 [2024-11-04 07:22:24.631148] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.028 [2024-11-04 07:22:24.631165] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.028 [2024-11-04 07:22:24.631178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.028 [2024-11-04 07:22:24.631214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.994 07:22:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.994 07:22:25 -- common/autotest_common.sh@852 -- # return 0 00:16:23.994 07:22:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.994 07:22:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:23.994 07:22:25 -- common/autotest_common.sh@10 -- # set +x 00:16:23.994 07:22:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.994 07:22:25 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:23.994 07:22:25 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:23.994 true 00:16:23.994 07:22:25 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:23.994 07:22:25 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:24.253 07:22:26 -- target/tls.sh@82 -- # version=0 00:16:24.253 07:22:26 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:24.253 07:22:26 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:24.511 07:22:26 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.511 07:22:26 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:24.769 07:22:26 -- target/tls.sh@90 -- # version=13 00:16:24.769 07:22:26 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:24.769 07:22:26 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:25.027 07:22:26 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.027 07:22:26 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:25.285 07:22:26 -- target/tls.sh@98 -- # version=7 00:16:25.285 07:22:26 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:25.285 07:22:26 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:25.285 07:22:26 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.285 07:22:27 -- target/tls.sh@105 -- # ktls=false 00:16:25.285 07:22:27 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:25.285 07:22:27 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:25.543 07:22:27 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:25.543 07:22:27 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:25.801 07:22:27 -- target/tls.sh@113 -- # ktls=true 00:16:25.801 07:22:27 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:25.801 07:22:27 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:26.060 07:22:27 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.060 07:22:27 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:26.318 07:22:28 -- target/tls.sh@121 -- # ktls=false 00:16:26.318 07:22:28 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:26.318 07:22:28 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:26.318 07:22:28 -- target/tls.sh@49 -- # local key hash crc 00:16:26.318 07:22:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:26.318 07:22:28 -- target/tls.sh@51 -- # hash=01 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # gzip -1 -c 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # head -c 4 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # tail -c8 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # crc='p$H�' 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.318 07:22:28 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.318 07:22:28 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:26.318 07:22:28 -- target/tls.sh@49 -- # local key hash crc 00:16:26.318 07:22:28 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:26.318 07:22:28 -- target/tls.sh@51 -- # hash=01 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # gzip -1 -c 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # tail -c8 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # head -c 4 00:16:26.318 07:22:28 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:26.318 07:22:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.318 07:22:28 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.318 07:22:28 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:26.318 07:22:28 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:26.318 07:22:28 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.318 07:22:28 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.318 07:22:28 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:26.318 07:22:28 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:26.318 07:22:28 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:26.577 07:22:28 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:27.144 07:22:28 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.144 07:22:28 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.144 07:22:28 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:27.144 [2024-11-04 07:22:28.973673] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.403 07:22:28 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:27.662 07:22:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:27.921 [2024-11-04 07:22:29.529779] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.921 [2024-11-04 07:22:29.529987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.921 07:22:29 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:28.181 malloc0 00:16:28.181 07:22:29 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.438 07:22:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:28.696 07:22:30 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:40.898 Initializing NVMe Controllers 00:16:40.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.898 Initialization complete. Launching workers. 00:16:40.898 ======================================================== 00:16:40.898 Latency(us) 00:16:40.898 Device Information : IOPS MiB/s Average min max 00:16:40.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11873.89 46.38 5390.97 1675.07 8518.08 00:16:40.898 ======================================================== 00:16:40.898 Total : 11873.89 46.38 5390.97 1675.07 8518.08 00:16:40.898 00:16:40.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
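Two things happened in the block above: the interchange-format PSKs were derived and the target was configured for TLS. A condensed sketch using the same commands as the trace follows; rpc.py stands for the scripts/rpc.py path shown above, and writing the key into key1.txt is implied by the key_path/chmod steps rather than spelled out in the trace:

  # gzip -1 appends a CRC-32 trailer; the last 8 bytes are CRC + size, so keep the first 4 of them
  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  echo -n "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):" > key1.txt
  chmod 0600 key1.txt
  # TLS 1.3 on the ssl sock implementation, a TCP listener created with -k (TLS),
  # and the PSK bound to the one host NQN that is allowed to connect
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
  # sanity run: spdk_nvme_perf with -S ssl and --psk-path against 10.0.0.2:4420, as traced above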
00:16:40.898 07:22:40 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:40.898 07:22:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:40.898 07:22:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:40.898 07:22:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:40.898 07:22:40 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:40.898 07:22:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.898 07:22:40 -- target/tls.sh@28 -- # bdevperf_pid=88407 00:16:40.898 07:22:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.898 07:22:40 -- target/tls.sh@31 -- # waitforlisten 88407 /var/tmp/bdevperf.sock 00:16:40.898 07:22:40 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:40.898 07:22:40 -- common/autotest_common.sh@819 -- # '[' -z 88407 ']' 00:16:40.898 07:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.898 07:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.898 07:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.898 07:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.898 07:22:40 -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 [2024-11-04 07:22:40.581123] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:40.898 [2024-11-04 07:22:40.581223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88407 ] 00:16:40.898 [2024-11-04 07:22:40.725452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.898 [2024-11-04 07:22:40.804236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.898 07:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.898 07:22:41 -- common/autotest_common.sh@852 -- # return 0 00:16:40.898 07:22:41 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:40.898 [2024-11-04 07:22:41.676934] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:40.898 TLSTESTn1 00:16:40.898 07:22:41 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:40.898 Running I/O for 10 seconds... 
00:16:50.870 00:16:50.870 Latency(us) 00:16:50.870 [2024-11-04T07:22:52.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.870 [2024-11-04T07:22:52.711Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:50.870 Verification LBA range: start 0x0 length 0x2000 00:16:50.870 TLSTESTn1 : 10.01 6515.94 25.45 0.00 0.00 19614.65 4259.84 20733.21 00:16:50.870 [2024-11-04T07:22:52.711Z] =================================================================================================================== 00:16:50.870 [2024-11-04T07:22:52.711Z] Total : 6515.94 25.45 0.00 0.00 19614.65 4259.84 20733.21 00:16:50.870 0 00:16:50.870 07:22:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.870 07:22:51 -- target/tls.sh@45 -- # killprocess 88407 00:16:50.870 07:22:51 -- common/autotest_common.sh@926 -- # '[' -z 88407 ']' 00:16:50.870 07:22:51 -- common/autotest_common.sh@930 -- # kill -0 88407 00:16:50.870 07:22:51 -- common/autotest_common.sh@931 -- # uname 00:16:50.870 07:22:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.870 07:22:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88407 00:16:50.870 07:22:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:50.870 07:22:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:50.870 07:22:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88407' 00:16:50.870 killing process with pid 88407 00:16:50.870 07:22:51 -- common/autotest_common.sh@945 -- # kill 88407 00:16:50.870 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.870 00:16:50.870 Latency(us) 00:16:50.870 [2024-11-04T07:22:52.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.870 [2024-11-04T07:22:52.711Z] =================================================================================================================== 00:16:50.870 [2024-11-04T07:22:52.711Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.870 07:22:51 -- common/autotest_common.sh@950 -- # wait 88407 00:16:50.870 07:22:52 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.870 07:22:52 -- common/autotest_common.sh@640 -- # local es=0 00:16:50.870 07:22:52 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.870 07:22:52 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:50.870 07:22:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.870 07:22:52 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:50.870 07:22:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.870 07:22:52 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.870 07:22:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.870 07:22:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.870 07:22:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.870 07:22:52 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:50.870 07:22:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.870 
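The run_bdevperf helper exercised above, and reused by the negative cases that follow, comes down to three steps. Paths are the ones printed in the trace; this is a sketch of the flow, not the helper itself:

  # 1) start bdevperf idle (-z) with its own RPC socket
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # 2) attach an NVMe/TCP controller over TLS; --psk selects the key offered in the handshake
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
  # 3) run the queued verify workload against the resulting TLSTESTn1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests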
07:22:52 -- target/tls.sh@28 -- # bdevperf_pid=88556 00:16:50.870 07:22:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.870 07:22:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.870 07:22:52 -- target/tls.sh@31 -- # waitforlisten 88556 /var/tmp/bdevperf.sock 00:16:50.870 07:22:52 -- common/autotest_common.sh@819 -- # '[' -z 88556 ']' 00:16:50.870 07:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.870 07:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.870 07:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.870 07:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.870 07:22:52 -- common/autotest_common.sh@10 -- # set +x 00:16:50.870 [2024-11-04 07:22:52.202868] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:50.870 [2024-11-04 07:22:52.203140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88556 ] 00:16:50.870 [2024-11-04 07:22:52.333899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.870 [2024-11-04 07:22:52.389871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.437 07:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.437 07:22:53 -- common/autotest_common.sh@852 -- # return 0 00:16:51.437 07:22:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:51.696 [2024-11-04 07:22:53.447539] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.696 [2024-11-04 07:22:53.459161] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:51.696 [2024-11-04 07:22:53.459941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012cc0 (107): Transport endpoint is not connected 00:16:51.696 [2024-11-04 07:22:53.460931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012cc0 (9): Bad file descriptor 00:16:51.696 [2024-11-04 07:22:53.461927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:51.696 [2024-11-04 07:22:53.461963] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:51.696 [2024-11-04 07:22:53.461973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
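This first negative case swaps in key2.txt, which was never registered on the target, so the attach fails with the "Transport endpoint is not connected" errors logged above; the NOT wrapper only passes when that failure happens. Reduced to a standalone check (a sketch, not the wrapper in autotest_common.sh):

  if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
      echo "attach with the wrong PSK unexpectedly succeeded" >&2
      exit 1
  fi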
00:16:51.696 2024/11/04 07:22:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:51.696 request: 00:16:51.696 { 00:16:51.696 "method": "bdev_nvme_attach_controller", 00:16:51.696 "params": { 00:16:51.696 "name": "TLSTEST", 00:16:51.696 "trtype": "tcp", 00:16:51.696 "traddr": "10.0.0.2", 00:16:51.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.696 "adrfam": "ipv4", 00:16:51.696 "trsvcid": "4420", 00:16:51.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.696 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:51.696 } 00:16:51.696 } 00:16:51.696 Got JSON-RPC error response 00:16:51.696 GoRPCClient: error on JSON-RPC call 00:16:51.696 07:22:53 -- target/tls.sh@36 -- # killprocess 88556 00:16:51.696 07:22:53 -- common/autotest_common.sh@926 -- # '[' -z 88556 ']' 00:16:51.696 07:22:53 -- common/autotest_common.sh@930 -- # kill -0 88556 00:16:51.696 07:22:53 -- common/autotest_common.sh@931 -- # uname 00:16:51.696 07:22:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.696 07:22:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88556 00:16:51.696 killing process with pid 88556 00:16:51.696 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.696 00:16:51.696 Latency(us) 00:16:51.696 [2024-11-04T07:22:53.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.696 [2024-11-04T07:22:53.537Z] =================================================================================================================== 00:16:51.696 [2024-11-04T07:22:53.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.696 07:22:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:51.696 07:22:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:51.696 07:22:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88556' 00:16:51.696 07:22:53 -- common/autotest_common.sh@945 -- # kill 88556 00:16:51.696 07:22:53 -- common/autotest_common.sh@950 -- # wait 88556 00:16:51.955 07:22:53 -- target/tls.sh@37 -- # return 1 00:16:51.955 07:22:53 -- common/autotest_common.sh@643 -- # es=1 00:16:51.955 07:22:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:51.955 07:22:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:51.955 07:22:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:51.955 07:22:53 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.955 07:22:53 -- common/autotest_common.sh@640 -- # local es=0 00:16:51.955 07:22:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.955 07:22:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:51.955 07:22:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.955 07:22:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:51.955 07:22:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.955 07:22:53 -- common/autotest_common.sh@643 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.955 07:22:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.955 07:22:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.955 07:22:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:51.955 07:22:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:51.955 07:22:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.955 07:22:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.955 07:22:53 -- target/tls.sh@28 -- # bdevperf_pid=88603 00:16:51.955 07:22:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.955 07:22:53 -- target/tls.sh@31 -- # waitforlisten 88603 /var/tmp/bdevperf.sock 00:16:51.955 07:22:53 -- common/autotest_common.sh@819 -- # '[' -z 88603 ']' 00:16:51.955 07:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.955 07:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.955 07:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.955 07:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.955 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.955 [2024-11-04 07:22:53.735229] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:51.955 [2024-11-04 07:22:53.735441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88603 ] 00:16:52.214 [2024-11-04 07:22:53.859590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.214 [2024-11-04 07:22:53.914372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.149 07:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.149 07:22:54 -- common/autotest_common.sh@852 -- # return 0 00:16:53.149 07:22:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.149 [2024-11-04 07:22:54.980424] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.149 [2024-11-04 07:22:54.989015] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.149 [2024-11-04 07:22:54.989050] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.149 [2024-11-04 07:22:54.989095] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:53.408 [2024-11-04 07:22:54.989878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x22cdcc0 (107): Transport endpoint is not connected 00:16:53.408 [2024-11-04 07:22:54.990868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cdcc0 (9): Bad file descriptor 00:16:53.408 [2024-11-04 07:22:54.991863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.408 [2024-11-04 07:22:54.991905] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:53.408 [2024-11-04 07:22:54.991916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.409 2024/11/04 07:22:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:53.409 request: 00:16:53.409 { 00:16:53.409 "method": "bdev_nvme_attach_controller", 00:16:53.409 "params": { 00:16:53.409 "name": "TLSTEST", 00:16:53.409 "trtype": "tcp", 00:16:53.409 "traddr": "10.0.0.2", 00:16:53.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:53.409 "adrfam": "ipv4", 00:16:53.409 "trsvcid": "4420", 00:16:53.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.409 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:53.409 } 00:16:53.409 } 00:16:53.409 Got JSON-RPC error response 00:16:53.409 GoRPCClient: error on JSON-RPC call 00:16:53.409 07:22:55 -- target/tls.sh@36 -- # killprocess 88603 00:16:53.409 07:22:55 -- common/autotest_common.sh@926 -- # '[' -z 88603 ']' 00:16:53.409 07:22:55 -- common/autotest_common.sh@930 -- # kill -0 88603 00:16:53.409 07:22:55 -- common/autotest_common.sh@931 -- # uname 00:16:53.409 07:22:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.409 07:22:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88603 00:16:53.409 killing process with pid 88603 00:16:53.409 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.409 00:16:53.409 Latency(us) 00:16:53.409 [2024-11-04T07:22:55.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.409 [2024-11-04T07:22:55.250Z] =================================================================================================================== 00:16:53.409 [2024-11-04T07:22:55.250Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.409 07:22:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:53.409 07:22:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:53.409 07:22:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88603' 00:16:53.409 07:22:55 -- common/autotest_common.sh@945 -- # kill 88603 00:16:53.409 07:22:55 -- common/autotest_common.sh@950 -- # wait 88603 00:16:53.409 07:22:55 -- target/tls.sh@37 -- # return 1 00:16:53.409 07:22:55 -- common/autotest_common.sh@643 -- # es=1 00:16:53.409 07:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.409 07:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.409 07:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.409 07:22:55 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.409 07:22:55 -- 
common/autotest_common.sh@640 -- # local es=0 00:16:53.409 07:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.409 07:22:55 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:53.409 07:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.409 07:22:55 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:53.409 07:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.409 07:22:55 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.409 07:22:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.409 07:22:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:53.409 07:22:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.409 07:22:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:53.409 07:22:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.409 07:22:55 -- target/tls.sh@28 -- # bdevperf_pid=88643 00:16:53.409 07:22:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.409 07:22:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.409 07:22:55 -- target/tls.sh@31 -- # waitforlisten 88643 /var/tmp/bdevperf.sock 00:16:53.409 07:22:55 -- common/autotest_common.sh@819 -- # '[' -z 88643 ']' 00:16:53.409 07:22:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.409 07:22:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.409 07:22:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.409 07:22:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.409 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:16:53.668 [2024-11-04 07:22:55.281442] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:53.668 [2024-11-04 07:22:55.281543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88643 ] 00:16:53.668 [2024-11-04 07:22:55.420212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.668 [2024-11-04 07:22:55.494995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.603 07:22:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.603 07:22:56 -- common/autotest_common.sh@852 -- # return 0 00:16:54.603 07:22:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.861 [2024-11-04 07:22:56.463348] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.861 [2024-11-04 07:22:56.469103] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:54.861 [2024-11-04 07:22:56.469147] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:54.861 [2024-11-04 07:22:56.469189] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:54.862 [2024-11-04 07:22:56.469687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d90cc0 (107): Transport endpoint is not connected 00:16:54.862 [2024-11-04 07:22:56.470677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d90cc0 (9): Bad file descriptor 00:16:54.862 [2024-11-04 07:22:56.471672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:54.862 [2024-11-04 07:22:56.471708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:54.862 [2024-11-04 07:22:56.471733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
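Both identity-mismatch cases (wrong host NQN earlier, wrong subsystem NQN here) fail the same way: the target looks the key up by the PSK identity logged by tcp_sock_get_key, "NVMe0R01 <hostnqn> <subnqn>", and only the host1/cnode1 pairing was registered during setup. Making such an attach succeed would need an extra registration along these lines (hypothetical; cnode2 would also need its own subsystem and listener):

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt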
00:16:54.862 2024/11/04 07:22:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:54.862 request: 00:16:54.862 { 00:16:54.862 "method": "bdev_nvme_attach_controller", 00:16:54.862 "params": { 00:16:54.862 "name": "TLSTEST", 00:16:54.862 "trtype": "tcp", 00:16:54.862 "traddr": "10.0.0.2", 00:16:54.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.862 "adrfam": "ipv4", 00:16:54.862 "trsvcid": "4420", 00:16:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:54.862 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:54.862 } 00:16:54.862 } 00:16:54.862 Got JSON-RPC error response 00:16:54.862 GoRPCClient: error on JSON-RPC call 00:16:54.862 07:22:56 -- target/tls.sh@36 -- # killprocess 88643 00:16:54.862 07:22:56 -- common/autotest_common.sh@926 -- # '[' -z 88643 ']' 00:16:54.862 07:22:56 -- common/autotest_common.sh@930 -- # kill -0 88643 00:16:54.862 07:22:56 -- common/autotest_common.sh@931 -- # uname 00:16:54.862 07:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.862 07:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88643 00:16:54.862 killing process with pid 88643 00:16:54.862 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.862 00:16:54.862 Latency(us) 00:16:54.862 [2024-11-04T07:22:56.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.862 [2024-11-04T07:22:56.703Z] =================================================================================================================== 00:16:54.862 [2024-11-04T07:22:56.703Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.862 07:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:54.862 07:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:54.862 07:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88643' 00:16:54.862 07:22:56 -- common/autotest_common.sh@945 -- # kill 88643 00:16:54.862 07:22:56 -- common/autotest_common.sh@950 -- # wait 88643 00:16:55.121 07:22:56 -- target/tls.sh@37 -- # return 1 00:16:55.121 07:22:56 -- common/autotest_common.sh@643 -- # es=1 00:16:55.121 07:22:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:55.121 07:22:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:55.121 07:22:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:55.121 07:22:56 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.121 07:22:56 -- common/autotest_common.sh@640 -- # local es=0 00:16:55.121 07:22:56 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.121 07:22:56 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:55.121 07:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:55.121 07:22:56 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:55.121 07:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:55.121 07:22:56 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.121 07:22:56 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.121 07:22:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:55.121 07:22:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.121 07:22:56 -- target/tls.sh@23 -- # psk= 00:16:55.121 07:22:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.121 07:22:56 -- target/tls.sh@28 -- # bdevperf_pid=88694 00:16:55.121 07:22:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.121 07:22:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.121 07:22:56 -- target/tls.sh@31 -- # waitforlisten 88694 /var/tmp/bdevperf.sock 00:16:55.121 07:22:56 -- common/autotest_common.sh@819 -- # '[' -z 88694 ']' 00:16:55.121 07:22:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.121 07:22:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.121 07:22:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.121 07:22:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.121 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:16:55.121 [2024-11-04 07:22:56.750359] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:55.121 [2024-11-04 07:22:56.750640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88694 ] 00:16:55.121 [2024-11-04 07:22:56.883006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.121 [2024-11-04 07:22:56.946890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.072 07:22:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.072 07:22:57 -- common/autotest_common.sh@852 -- # return 0 00:16:56.072 07:22:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:56.346 [2024-11-04 07:22:57.920034] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.346 [2024-11-04 07:22:57.921991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5b8c0 (9): Bad file descriptor 00:16:56.346 [2024-11-04 07:22:57.922977] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:56.346 [2024-11-04 07:22:57.923015] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:56.346 [2024-11-04 07:22:57.923025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:56.346 2024/11/04 07:22:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:56.346 request: 00:16:56.346 { 00:16:56.346 "method": "bdev_nvme_attach_controller", 00:16:56.346 "params": { 00:16:56.346 "name": "TLSTEST", 00:16:56.346 "trtype": "tcp", 00:16:56.346 "traddr": "10.0.0.2", 00:16:56.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.346 "adrfam": "ipv4", 00:16:56.346 "trsvcid": "4420", 00:16:56.346 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.346 } 00:16:56.346 } 00:16:56.346 Got JSON-RPC error response 00:16:56.346 GoRPCClient: error on JSON-RPC call 00:16:56.346 07:22:57 -- target/tls.sh@36 -- # killprocess 88694 00:16:56.346 07:22:57 -- common/autotest_common.sh@926 -- # '[' -z 88694 ']' 00:16:56.346 07:22:57 -- common/autotest_common.sh@930 -- # kill -0 88694 00:16:56.346 07:22:57 -- common/autotest_common.sh@931 -- # uname 00:16:56.346 07:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.346 07:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88694 00:16:56.346 killing process with pid 88694 00:16:56.346 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.346 00:16:56.346 Latency(us) 00:16:56.346 [2024-11-04T07:22:58.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.346 [2024-11-04T07:22:58.187Z] =================================================================================================================== 00:16:56.346 [2024-11-04T07:22:58.187Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.346 07:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:56.346 07:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:56.346 07:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88694' 00:16:56.346 07:22:57 -- common/autotest_common.sh@945 -- # kill 88694 00:16:56.346 07:22:57 -- common/autotest_common.sh@950 -- # wait 88694 00:16:56.346 07:22:58 -- target/tls.sh@37 -- # return 1 00:16:56.346 07:22:58 -- common/autotest_common.sh@643 -- # es=1 00:16:56.346 07:22:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:56.346 07:22:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:56.346 07:22:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:56.346 07:22:58 -- target/tls.sh@167 -- # killprocess 88033 00:16:56.346 07:22:58 -- common/autotest_common.sh@926 -- # '[' -z 88033 ']' 00:16:56.346 07:22:58 -- common/autotest_common.sh@930 -- # kill -0 88033 00:16:56.346 07:22:58 -- common/autotest_common.sh@931 -- # uname 00:16:56.346 07:22:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.346 07:22:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88033 00:16:56.346 killing process with pid 88033 00:16:56.346 07:22:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:56.346 07:22:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:56.346 07:22:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88033' 00:16:56.346 07:22:58 -- common/autotest_common.sh@945 -- # kill 88033 00:16:56.346 07:22:58 -- common/autotest_common.sh@950 -- # wait 88033 00:16:56.605 07:22:58 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:16:56.605 07:22:58 -- target/tls.sh@49 -- # local key hash crc 00:16:56.605 07:22:58 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:56.605 07:22:58 -- target/tls.sh@51 -- # hash=02 00:16:56.605 07:22:58 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:56.605 07:22:58 -- target/tls.sh@52 -- # gzip -1 -c 00:16:56.605 07:22:58 -- target/tls.sh@52 -- # head -c 4 00:16:56.605 07:22:58 -- target/tls.sh@52 -- # tail -c8 00:16:56.863 07:22:58 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:56.863 07:22:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:56.863 07:22:58 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:56.863 07:22:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.863 07:22:58 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.863 07:22:58 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:56.863 07:22:58 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.863 07:22:58 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:56.863 07:22:58 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:56.863 07:22:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.864 07:22:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:56.864 07:22:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.864 07:22:58 -- nvmf/common.sh@469 -- # nvmfpid=88750 00:16:56.864 07:22:58 -- nvmf/common.sh@470 -- # waitforlisten 88750 00:16:56.864 07:22:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.864 07:22:58 -- common/autotest_common.sh@819 -- # '[' -z 88750 ']' 00:16:56.864 07:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.864 07:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.864 07:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.864 07:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.864 07:22:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.864 [2024-11-04 07:22:58.523179] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:56.864 [2024-11-04 07:22:58.523279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.864 [2024-11-04 07:22:58.663669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.122 [2024-11-04 07:22:58.729131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.122 [2024-11-04 07:22:58.729277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
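The format_interchange_psk trace above builds the long TLS key by appending a CRC-32 of the configured secret (pulled from the gzip trailer with 'tail -c8 | head -c4') and base64-encoding the result behind an 'NVMeTLSkey-1:02:' prefix. A minimal Python sketch of the same derivation, not part of the test suite; it relies on the fact that zlib.crc32 computes the same CRC-32 that gzip stores little-endian in its trailer:

    import base64
    import struct
    import zlib

    def format_interchange_psk(secret: str, hash_id: str = "02") -> str:
        # The shell trace reads the 4-byte little-endian CRC-32 from the gzip
        # trailer ('gzip -1 -c | tail -c8 | head -c4'); zlib.crc32 over the
        # same bytes yields the identical value.
        data = secret.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
        return ("NVMeTLSkey-1:" + hash_id + ":"
                + base64.b64encode(data + crc).decode("ascii") + ":")

    # format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677")
    # reproduces the NVMeTLSkey-1:02:MDAx...wWXNJw==: value written to
    # key_long.txt in the trace above.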
00:16:57.122 [2024-11-04 07:22:58.729289] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.122 [2024-11-04 07:22:58.729298] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.122 [2024-11-04 07:22:58.729324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.690 07:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.690 07:22:59 -- common/autotest_common.sh@852 -- # return 0 00:16:57.690 07:22:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.690 07:22:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:57.690 07:22:59 -- common/autotest_common.sh@10 -- # set +x 00:16:57.953 07:22:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.953 07:22:59 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:57.953 07:22:59 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:57.953 07:22:59 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.215 [2024-11-04 07:22:59.818102] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.215 07:22:59 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:58.473 07:23:00 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:58.732 [2024-11-04 07:23:00.378193] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.732 [2024-11-04 07:23:00.378475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.732 07:23:00 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:58.990 malloc0 00:16:58.990 07:23:00 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.249 07:23:00 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.507 07:23:01 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.507 07:23:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.508 07:23:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.508 07:23:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.508 07:23:01 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:59.508 07:23:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.508 07:23:01 -- target/tls.sh@28 -- # bdevperf_pid=88857 00:16:59.508 07:23:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.508 07:23:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.508 07:23:01 -- target/tls.sh@31 -- # waitforlisten 88857 /var/tmp/bdevperf.sock 00:16:59.508 07:23:01 -- 
common/autotest_common.sh@819 -- # '[' -z 88857 ']' 00:16:59.508 07:23:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.508 07:23:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.508 07:23:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.508 07:23:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.508 07:23:01 -- common/autotest_common.sh@10 -- # set +x 00:16:59.508 [2024-11-04 07:23:01.198035] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:59.508 [2024-11-04 07:23:01.198124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88857 ] 00:16:59.508 [2024-11-04 07:23:01.332596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.767 [2024-11-04 07:23:01.406303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.333 07:23:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.333 07:23:02 -- common/autotest_common.sh@852 -- # return 0 00:17:00.333 07:23:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:00.592 [2024-11-04 07:23:02.261325] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.592 TLSTESTn1 00:17:00.592 07:23:02 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.850 Running I/O for 10 seconds... 
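With the host registered against the long PSK, bdevperf (pid 88857) attaches the TLS controller and drives the verify workload for ten seconds through bdevperf.py, producing the numbers that follow. A rough Python driver for the same two RPC-side steps, reusing the rpc.py and bdevperf.py invocations visible in the trace (paths are the ones used by this environment; starting bdevperf itself with -z and its -q/-o/-w/-t job options is left out):

    import subprocess

    SPDK = "/home/vagrant/spdk_repo/spdk"
    SOCK = "/var/tmp/bdevperf.sock"
    PSK = SPDK + "/test/nvmf/target/key_long.txt"

    def run_tls_bdevperf():
        # Attach the TLS-enabled controller through the bdevperf RPC socket
        # with the same arguments the trace uses, then kick off the queued
        # verify job via bdevperf.py perform_tests.
        subprocess.run([SPDK + "/scripts/rpc.py", "-s", SOCK,
                        "bdev_nvme_attach_controller", "-b", "TLSTEST",
                        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
                        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
                        "-q", "nqn.2016-06.io.spdk:host1", "--psk", PSK],
                       check=True)
        subprocess.run([SPDK + "/examples/bdev/bdevperf/bdevperf.py",
                        "-t", "20", "-s", SOCK, "perform_tests"], check=True)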
00:17:10.827 00:17:10.827 Latency(us) 00:17:10.827 [2024-11-04T07:23:12.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.827 [2024-11-04T07:23:12.668Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.827 Verification LBA range: start 0x0 length 0x2000 00:17:10.827 TLSTESTn1 : 10.01 6436.32 25.14 0.00 0.00 19857.60 4676.89 19065.02 00:17:10.827 [2024-11-04T07:23:12.668Z] =================================================================================================================== 00:17:10.827 [2024-11-04T07:23:12.668Z] Total : 6436.32 25.14 0.00 0.00 19857.60 4676.89 19065.02 00:17:10.827 0 00:17:10.827 07:23:12 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.827 07:23:12 -- target/tls.sh@45 -- # killprocess 88857 00:17:10.827 07:23:12 -- common/autotest_common.sh@926 -- # '[' -z 88857 ']' 00:17:10.827 07:23:12 -- common/autotest_common.sh@930 -- # kill -0 88857 00:17:10.827 07:23:12 -- common/autotest_common.sh@931 -- # uname 00:17:10.827 07:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.827 07:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88857 00:17:10.827 killing process with pid 88857 00:17:10.827 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.827 00:17:10.827 Latency(us) 00:17:10.827 [2024-11-04T07:23:12.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.827 [2024-11-04T07:23:12.668Z] =================================================================================================================== 00:17:10.827 [2024-11-04T07:23:12.668Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.827 07:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:10.827 07:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:10.827 07:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88857' 00:17:10.827 07:23:12 -- common/autotest_common.sh@945 -- # kill 88857 00:17:10.827 07:23:12 -- common/autotest_common.sh@950 -- # wait 88857 00:17:11.086 07:23:12 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.086 07:23:12 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.086 07:23:12 -- common/autotest_common.sh@640 -- # local es=0 00:17:11.087 07:23:12 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.087 07:23:12 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:11.087 07:23:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.087 07:23:12 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:11.087 07:23:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.087 07:23:12 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.087 07:23:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.087 07:23:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.087 07:23:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:11.087 07:23:12 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:11.087 07:23:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.087 07:23:12 -- target/tls.sh@28 -- # bdevperf_pid=89005 00:17:11.087 07:23:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.087 07:23:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.087 07:23:12 -- target/tls.sh@31 -- # waitforlisten 89005 /var/tmp/bdevperf.sock 00:17:11.087 07:23:12 -- common/autotest_common.sh@819 -- # '[' -z 89005 ']' 00:17:11.087 07:23:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.087 07:23:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.087 07:23:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.087 07:23:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.087 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:17:11.087 [2024-11-04 07:23:12.798009] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:11.087 [2024-11-04 07:23:12.798254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89005 ] 00:17:11.345 [2024-11-04 07:23:12.936424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.345 [2024-11-04 07:23:12.995978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.913 07:23:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.913 07:23:13 -- common/autotest_common.sh@852 -- # return 0 00:17:11.913 07:23:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.172 [2024-11-04 07:23:13.969897] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.172 [2024-11-04 07:23:13.969954] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:12.172 2024/11/04 07:23:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.172 request: 00:17:12.172 { 00:17:12.172 "method": "bdev_nvme_attach_controller", 00:17:12.172 "params": { 00:17:12.172 "name": "TLSTEST", 00:17:12.172 "trtype": "tcp", 00:17:12.172 "traddr": "10.0.0.2", 00:17:12.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.172 "adrfam": "ipv4", 00:17:12.172 "trsvcid": "4420", 00:17:12.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.172 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:12.172 } 00:17:12.172 } 00:17:12.172 Got 
JSON-RPC error response 00:17:12.172 GoRPCClient: error on JSON-RPC call 00:17:12.172 07:23:13 -- target/tls.sh@36 -- # killprocess 89005 00:17:12.172 07:23:13 -- common/autotest_common.sh@926 -- # '[' -z 89005 ']' 00:17:12.172 07:23:13 -- common/autotest_common.sh@930 -- # kill -0 89005 00:17:12.172 07:23:13 -- common/autotest_common.sh@931 -- # uname 00:17:12.172 07:23:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.172 07:23:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89005 00:17:12.431 killing process with pid 89005 00:17:12.431 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.431 00:17:12.431 Latency(us) 00:17:12.431 [2024-11-04T07:23:14.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.431 [2024-11-04T07:23:14.272Z] =================================================================================================================== 00:17:12.431 [2024-11-04T07:23:14.272Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.431 07:23:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:12.431 07:23:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:12.431 07:23:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89005' 00:17:12.431 07:23:14 -- common/autotest_common.sh@945 -- # kill 89005 00:17:12.431 07:23:14 -- common/autotest_common.sh@950 -- # wait 89005 00:17:12.431 07:23:14 -- target/tls.sh@37 -- # return 1 00:17:12.431 07:23:14 -- common/autotest_common.sh@643 -- # es=1 00:17:12.431 07:23:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:12.431 07:23:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:12.431 07:23:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:12.431 07:23:14 -- target/tls.sh@183 -- # killprocess 88750 00:17:12.431 07:23:14 -- common/autotest_common.sh@926 -- # '[' -z 88750 ']' 00:17:12.431 07:23:14 -- common/autotest_common.sh@930 -- # kill -0 88750 00:17:12.431 07:23:14 -- common/autotest_common.sh@931 -- # uname 00:17:12.431 07:23:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.431 07:23:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88750 00:17:12.431 killing process with pid 88750 00:17:12.431 07:23:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:12.431 07:23:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:12.431 07:23:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88750' 00:17:12.431 07:23:14 -- common/autotest_common.sh@945 -- # kill 88750 00:17:12.431 07:23:14 -- common/autotest_common.sh@950 -- # wait 88750 00:17:12.690 07:23:14 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:12.690 07:23:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:12.690 07:23:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:12.690 07:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
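The failure above is the point of this step: after 'chmod 0666' on key_long.txt, tcp_load_psk rejects the file ("Incorrect permissions for PSK file") and the attach fails with Code=-22. A small sketch of the precondition a caller could check before passing a PSK path; the exact mask SPDK enforces is not shown in this log, so the sketch simply treats any group/other access as unacceptable, which matches the 0666-fails / 0600-succeeds behaviour seen here:

    import os
    import stat

    def psk_file_permissions_ok(path: str) -> bool:
        # Reject keys readable or writable by group or other, mirroring the
        # owner-only expectation the target and bdevperf enforce in this test.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    # psk_file_permissions_ok("/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt")
    # is False while the file is 0666 and True again once the test restores 0600.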
00:17:12.690 07:23:14 -- nvmf/common.sh@469 -- # nvmfpid=89057 00:17:12.690 07:23:14 -- nvmf/common.sh@470 -- # waitforlisten 89057 00:17:12.690 07:23:14 -- common/autotest_common.sh@819 -- # '[' -z 89057 ']' 00:17:12.690 07:23:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.690 07:23:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.690 07:23:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.690 07:23:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.690 07:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 07:23:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:12.949 [2024-11-04 07:23:14.543691] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:12.949 [2024-11-04 07:23:14.543782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.949 [2024-11-04 07:23:14.683653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.949 [2024-11-04 07:23:14.746027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:12.949 [2024-11-04 07:23:14.746171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.949 [2024-11-04 07:23:14.746182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.949 [2024-11-04 07:23:14.746190] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
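Each app start in this trace is followed by a waitforlisten call (with max_retries=100, as the xtrace shows) that blocks until the new process is serving its UNIX-domain RPC socket. A simplified Python equivalent, assuming a plain connect test is enough; the real shell helper does additional liveness checking that this sketch omits:

    import socket
    import time

    def waitforlisten(rpc_sock: str = "/var/tmp/spdk.sock",
                      max_retries: int = 100, delay: float = 0.5) -> bool:
        # Poll the UNIX-domain RPC socket until the freshly started SPDK app
        # (nvmf_tgt or bdevperf) accepts a connection, then report readiness.
        for _ in range(max_retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(rpc_sock)
                return True
            except OSError:
                time.sleep(delay)
            finally:
                s.close()
        return False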
00:17:12.949 [2024-11-04 07:23:14.746215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.885 07:23:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.885 07:23:15 -- common/autotest_common.sh@852 -- # return 0 00:17:13.885 07:23:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.885 07:23:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:13.885 07:23:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.885 07:23:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.885 07:23:15 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.885 07:23:15 -- common/autotest_common.sh@640 -- # local es=0 00:17:13.885 07:23:15 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.885 07:23:15 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:13.885 07:23:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.885 07:23:15 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:13.885 07:23:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.885 07:23:15 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.885 07:23:15 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.885 07:23:15 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:14.143 [2024-11-04 07:23:15.727732] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.143 07:23:15 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.143 07:23:15 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.402 [2024-11-04 07:23:16.127793] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.402 [2024-11-04 07:23:16.128055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.402 07:23:16 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.661 malloc0 00:17:14.661 07:23:16 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:14.919 07:23:16 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.178 [2024-11-04 07:23:16.825652] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:15.178 [2024-11-04 07:23:16.825680] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:15.178 [2024-11-04 07:23:16.825697] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:15.178 2024/11/04 07:23:16 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, 
err: Code=-32603 Msg=Internal error 00:17:15.178 request: 00:17:15.178 { 00:17:15.178 "method": "nvmf_subsystem_add_host", 00:17:15.178 "params": { 00:17:15.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.178 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.178 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:15.178 } 00:17:15.178 } 00:17:15.178 Got JSON-RPC error response 00:17:15.178 GoRPCClient: error on JSON-RPC call 00:17:15.178 07:23:16 -- common/autotest_common.sh@643 -- # es=1 00:17:15.178 07:23:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:15.178 07:23:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:15.178 07:23:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:15.178 07:23:16 -- target/tls.sh@189 -- # killprocess 89057 00:17:15.178 07:23:16 -- common/autotest_common.sh@926 -- # '[' -z 89057 ']' 00:17:15.178 07:23:16 -- common/autotest_common.sh@930 -- # kill -0 89057 00:17:15.178 07:23:16 -- common/autotest_common.sh@931 -- # uname 00:17:15.178 07:23:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.178 07:23:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89057 00:17:15.178 killing process with pid 89057 00:17:15.178 07:23:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:15.178 07:23:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:15.178 07:23:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89057' 00:17:15.178 07:23:16 -- common/autotest_common.sh@945 -- # kill 89057 00:17:15.178 07:23:16 -- common/autotest_common.sh@950 -- # wait 89057 00:17:15.437 07:23:17 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.437 07:23:17 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:15.437 07:23:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.437 07:23:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:15.437 07:23:17 -- common/autotest_common.sh@10 -- # set +x 00:17:15.437 07:23:17 -- nvmf/common.sh@469 -- # nvmfpid=89168 00:17:15.437 07:23:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.437 07:23:17 -- nvmf/common.sh@470 -- # waitforlisten 89168 00:17:15.437 07:23:17 -- common/autotest_common.sh@819 -- # '[' -z 89168 ']' 00:17:15.437 07:23:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.437 07:23:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.437 07:23:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.437 07:23:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.437 07:23:17 -- common/autotest_common.sh@10 -- # set +x 00:17:15.437 [2024-11-04 07:23:17.220783] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:15.437 [2024-11-04 07:23:17.221347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.695 [2024-11-04 07:23:17.355952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.695 [2024-11-04 07:23:17.417831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.695 [2024-11-04 07:23:17.417999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.695 [2024-11-04 07:23:17.418011] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.695 [2024-11-04 07:23:17.418020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.695 [2024-11-04 07:23:17.418052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.631 07:23:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:16.631 07:23:18 -- common/autotest_common.sh@852 -- # return 0 00:17:16.631 07:23:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.631 07:23:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:16.631 07:23:18 -- common/autotest_common.sh@10 -- # set +x 00:17:16.631 07:23:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.631 07:23:18 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.631 07:23:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.631 07:23:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.631 [2024-11-04 07:23:18.395345] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.631 07:23:18 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.890 07:23:18 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.149 [2024-11-04 07:23:18.839413] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.149 [2024-11-04 07:23:18.839644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.149 07:23:18 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.407 malloc0 00:17:17.407 07:23:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.666 07:23:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.924 07:23:19 -- target/tls.sh@197 -- # bdevperf_pid=89269 00:17:17.924 07:23:19 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.924 07:23:19 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.924 07:23:19 -- target/tls.sh@200 -- # waitforlisten 89269 /var/tmp/bdevperf.sock 00:17:17.924 
07:23:19 -- common/autotest_common.sh@819 -- # '[' -z 89269 ']' 00:17:17.924 07:23:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.924 07:23:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.924 07:23:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.924 07:23:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.924 07:23:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.924 [2024-11-04 07:23:19.685225] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:17.924 [2024-11-04 07:23:19.685340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89269 ] 00:17:18.184 [2024-11-04 07:23:19.827472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.184 [2024-11-04 07:23:19.887616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.119 07:23:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.119 07:23:20 -- common/autotest_common.sh@852 -- # return 0 00:17:19.119 07:23:20 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.119 [2024-11-04 07:23:20.881140] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.119 TLSTESTn1 00:17:19.391 07:23:20 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:19.675 07:23:21 -- target/tls.sh@205 -- # tgtconf='{ 00:17:19.675 "subsystems": [ 00:17:19.675 { 00:17:19.675 "subsystem": "iobuf", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "method": "iobuf_set_options", 00:17:19.675 "params": { 00:17:19.675 "large_bufsize": 135168, 00:17:19.675 "large_pool_count": 1024, 00:17:19.675 "small_bufsize": 8192, 00:17:19.675 "small_pool_count": 8192 00:17:19.675 } 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "sock", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "method": "sock_impl_set_options", 00:17:19.675 "params": { 00:17:19.675 "enable_ktls": false, 00:17:19.675 "enable_placement_id": 0, 00:17:19.675 "enable_quickack": false, 00:17:19.675 "enable_recv_pipe": true, 00:17:19.675 "enable_zerocopy_send_client": false, 00:17:19.675 "enable_zerocopy_send_server": true, 00:17:19.675 "impl_name": "posix", 00:17:19.675 "recv_buf_size": 2097152, 00:17:19.675 "send_buf_size": 2097152, 00:17:19.675 "tls_version": 0, 00:17:19.675 "zerocopy_threshold": 0 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "sock_impl_set_options", 00:17:19.675 "params": { 00:17:19.675 "enable_ktls": false, 00:17:19.675 "enable_placement_id": 0, 00:17:19.675 "enable_quickack": false, 00:17:19.675 "enable_recv_pipe": true, 00:17:19.675 "enable_zerocopy_send_client": false, 00:17:19.675 "enable_zerocopy_send_server": true, 00:17:19.675 "impl_name": "ssl", 00:17:19.675 "recv_buf_size": 4096, 00:17:19.675 "send_buf_size": 4096, 00:17:19.675 
"tls_version": 0, 00:17:19.675 "zerocopy_threshold": 0 00:17:19.675 } 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "vmd", 00:17:19.675 "config": [] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "accel", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "method": "accel_set_options", 00:17:19.675 "params": { 00:17:19.675 "buf_count": 2048, 00:17:19.675 "large_cache_size": 16, 00:17:19.675 "sequence_count": 2048, 00:17:19.675 "small_cache_size": 128, 00:17:19.675 "task_count": 2048 00:17:19.675 } 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "bdev", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "method": "bdev_set_options", 00:17:19.675 "params": { 00:17:19.675 "bdev_auto_examine": true, 00:17:19.675 "bdev_io_cache_size": 256, 00:17:19.675 "bdev_io_pool_size": 65535, 00:17:19.675 "iobuf_large_cache_size": 16, 00:17:19.675 "iobuf_small_cache_size": 128 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_raid_set_options", 00:17:19.675 "params": { 00:17:19.675 "process_window_size_kb": 1024 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_iscsi_set_options", 00:17:19.675 "params": { 00:17:19.675 "timeout_sec": 30 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_nvme_set_options", 00:17:19.675 "params": { 00:17:19.675 "action_on_timeout": "none", 00:17:19.675 "allow_accel_sequence": false, 00:17:19.675 "arbitration_burst": 0, 00:17:19.675 "bdev_retry_count": 3, 00:17:19.675 "ctrlr_loss_timeout_sec": 0, 00:17:19.675 "delay_cmd_submit": true, 00:17:19.675 "fast_io_fail_timeout_sec": 0, 00:17:19.675 "generate_uuids": false, 00:17:19.675 "high_priority_weight": 0, 00:17:19.675 "io_path_stat": false, 00:17:19.675 "io_queue_requests": 0, 00:17:19.675 "keep_alive_timeout_ms": 10000, 00:17:19.675 "low_priority_weight": 0, 00:17:19.675 "medium_priority_weight": 0, 00:17:19.675 "nvme_adminq_poll_period_us": 10000, 00:17:19.675 "nvme_ioq_poll_period_us": 0, 00:17:19.675 "reconnect_delay_sec": 0, 00:17:19.675 "timeout_admin_us": 0, 00:17:19.675 "timeout_us": 0, 00:17:19.675 "transport_ack_timeout": 0, 00:17:19.675 "transport_retry_count": 4, 00:17:19.675 "transport_tos": 0 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_nvme_set_hotplug", 00:17:19.675 "params": { 00:17:19.675 "enable": false, 00:17:19.675 "period_us": 100000 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_malloc_create", 00:17:19.675 "params": { 00:17:19.675 "block_size": 4096, 00:17:19.675 "name": "malloc0", 00:17:19.675 "num_blocks": 8192, 00:17:19.675 "optimal_io_boundary": 0, 00:17:19.675 "physical_block_size": 4096, 00:17:19.675 "uuid": "6783c39c-c0f6-442c-8aac-3004b4a6c74f" 00:17:19.675 } 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_wait_for_examine" 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "nbd", 00:17:19.675 "config": [] 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "subsystem": "scheduler", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "method": "framework_set_scheduler", 00:17:19.675 "params": { 00:17:19.675 "name": "static" 00:17:19.675 } 00:17:19.675 } 00:17:19.676 ] 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "subsystem": "nvmf", 00:17:19.676 "config": [ 00:17:19.676 { 00:17:19.676 "method": "nvmf_set_config", 00:17:19.676 "params": { 00:17:19.676 "admin_cmd_passthru": { 00:17:19.676 "identify_ctrlr": false 00:17:19.676 }, 
00:17:19.676 "discovery_filter": "match_any" 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_set_max_subsystems", 00:17:19.676 "params": { 00:17:19.676 "max_subsystems": 1024 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_set_crdt", 00:17:19.676 "params": { 00:17:19.676 "crdt1": 0, 00:17:19.676 "crdt2": 0, 00:17:19.676 "crdt3": 0 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_create_transport", 00:17:19.676 "params": { 00:17:19.676 "abort_timeout_sec": 1, 00:17:19.676 "buf_cache_size": 4294967295, 00:17:19.676 "c2h_success": false, 00:17:19.676 "dif_insert_or_strip": false, 00:17:19.676 "in_capsule_data_size": 4096, 00:17:19.676 "io_unit_size": 131072, 00:17:19.676 "max_aq_depth": 128, 00:17:19.676 "max_io_qpairs_per_ctrlr": 127, 00:17:19.676 "max_io_size": 131072, 00:17:19.676 "max_queue_depth": 128, 00:17:19.676 "num_shared_buffers": 511, 00:17:19.676 "sock_priority": 0, 00:17:19.676 "trtype": "TCP", 00:17:19.676 "zcopy": false 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_create_subsystem", 00:17:19.676 "params": { 00:17:19.676 "allow_any_host": false, 00:17:19.676 "ana_reporting": false, 00:17:19.676 "max_cntlid": 65519, 00:17:19.676 "max_namespaces": 10, 00:17:19.676 "min_cntlid": 1, 00:17:19.676 "model_number": "SPDK bdev Controller", 00:17:19.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.676 "serial_number": "SPDK00000000000001" 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_subsystem_add_host", 00:17:19.676 "params": { 00:17:19.676 "host": "nqn.2016-06.io.spdk:host1", 00:17:19.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.676 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_subsystem_add_ns", 00:17:19.676 "params": { 00:17:19.676 "namespace": { 00:17:19.676 "bdev_name": "malloc0", 00:17:19.676 "nguid": "6783C39CC0F6442C8AAC3004B4A6C74F", 00:17:19.676 "nsid": 1, 00:17:19.676 "uuid": "6783c39c-c0f6-442c-8aac-3004b4a6c74f" 00:17:19.676 }, 00:17:19.676 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:19.676 } 00:17:19.676 }, 00:17:19.676 { 00:17:19.676 "method": "nvmf_subsystem_add_listener", 00:17:19.676 "params": { 00:17:19.676 "listen_address": { 00:17:19.676 "adrfam": "IPv4", 00:17:19.676 "traddr": "10.0.0.2", 00:17:19.676 "trsvcid": "4420", 00:17:19.676 "trtype": "TCP" 00:17:19.676 }, 00:17:19.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.676 "secure_channel": true 00:17:19.676 } 00:17:19.676 } 00:17:19.676 ] 00:17:19.676 } 00:17:19.676 ] 00:17:19.676 }' 00:17:19.676 07:23:21 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:19.935 07:23:21 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:19.935 "subsystems": [ 00:17:19.935 { 00:17:19.935 "subsystem": "iobuf", 00:17:19.935 "config": [ 00:17:19.935 { 00:17:19.935 "method": "iobuf_set_options", 00:17:19.935 "params": { 00:17:19.935 "large_bufsize": 135168, 00:17:19.935 "large_pool_count": 1024, 00:17:19.935 "small_bufsize": 8192, 00:17:19.935 "small_pool_count": 8192 00:17:19.935 } 00:17:19.935 } 00:17:19.935 ] 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "subsystem": "sock", 00:17:19.935 "config": [ 00:17:19.935 { 00:17:19.935 "method": "sock_impl_set_options", 00:17:19.935 "params": { 00:17:19.935 "enable_ktls": false, 00:17:19.935 "enable_placement_id": 0, 00:17:19.935 "enable_quickack": false, 00:17:19.935 "enable_recv_pipe": true, 
00:17:19.935 "enable_zerocopy_send_client": false, 00:17:19.935 "enable_zerocopy_send_server": true, 00:17:19.935 "impl_name": "posix", 00:17:19.935 "recv_buf_size": 2097152, 00:17:19.935 "send_buf_size": 2097152, 00:17:19.935 "tls_version": 0, 00:17:19.935 "zerocopy_threshold": 0 00:17:19.935 } 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "method": "sock_impl_set_options", 00:17:19.935 "params": { 00:17:19.935 "enable_ktls": false, 00:17:19.935 "enable_placement_id": 0, 00:17:19.935 "enable_quickack": false, 00:17:19.935 "enable_recv_pipe": true, 00:17:19.935 "enable_zerocopy_send_client": false, 00:17:19.935 "enable_zerocopy_send_server": true, 00:17:19.935 "impl_name": "ssl", 00:17:19.935 "recv_buf_size": 4096, 00:17:19.935 "send_buf_size": 4096, 00:17:19.935 "tls_version": 0, 00:17:19.935 "zerocopy_threshold": 0 00:17:19.935 } 00:17:19.935 } 00:17:19.935 ] 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "subsystem": "vmd", 00:17:19.935 "config": [] 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "subsystem": "accel", 00:17:19.935 "config": [ 00:17:19.935 { 00:17:19.935 "method": "accel_set_options", 00:17:19.935 "params": { 00:17:19.935 "buf_count": 2048, 00:17:19.935 "large_cache_size": 16, 00:17:19.935 "sequence_count": 2048, 00:17:19.935 "small_cache_size": 128, 00:17:19.935 "task_count": 2048 00:17:19.935 } 00:17:19.935 } 00:17:19.935 ] 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "subsystem": "bdev", 00:17:19.935 "config": [ 00:17:19.935 { 00:17:19.935 "method": "bdev_set_options", 00:17:19.935 "params": { 00:17:19.935 "bdev_auto_examine": true, 00:17:19.935 "bdev_io_cache_size": 256, 00:17:19.935 "bdev_io_pool_size": 65535, 00:17:19.935 "iobuf_large_cache_size": 16, 00:17:19.936 "iobuf_small_cache_size": 128 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_raid_set_options", 00:17:19.936 "params": { 00:17:19.936 "process_window_size_kb": 1024 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_iscsi_set_options", 00:17:19.936 "params": { 00:17:19.936 "timeout_sec": 30 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_nvme_set_options", 00:17:19.936 "params": { 00:17:19.936 "action_on_timeout": "none", 00:17:19.936 "allow_accel_sequence": false, 00:17:19.936 "arbitration_burst": 0, 00:17:19.936 "bdev_retry_count": 3, 00:17:19.936 "ctrlr_loss_timeout_sec": 0, 00:17:19.936 "delay_cmd_submit": true, 00:17:19.936 "fast_io_fail_timeout_sec": 0, 00:17:19.936 "generate_uuids": false, 00:17:19.936 "high_priority_weight": 0, 00:17:19.936 "io_path_stat": false, 00:17:19.936 "io_queue_requests": 512, 00:17:19.936 "keep_alive_timeout_ms": 10000, 00:17:19.936 "low_priority_weight": 0, 00:17:19.936 "medium_priority_weight": 0, 00:17:19.936 "nvme_adminq_poll_period_us": 10000, 00:17:19.936 "nvme_ioq_poll_period_us": 0, 00:17:19.936 "reconnect_delay_sec": 0, 00:17:19.936 "timeout_admin_us": 0, 00:17:19.936 "timeout_us": 0, 00:17:19.936 "transport_ack_timeout": 0, 00:17:19.936 "transport_retry_count": 4, 00:17:19.936 "transport_tos": 0 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_nvme_attach_controller", 00:17:19.936 "params": { 00:17:19.936 "adrfam": "IPv4", 00:17:19.936 "ctrlr_loss_timeout_sec": 0, 00:17:19.936 "ddgst": false, 00:17:19.936 "fast_io_fail_timeout_sec": 0, 00:17:19.936 "hdgst": false, 00:17:19.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.936 "name": "TLSTEST", 00:17:19.936 "prchk_guard": false, 00:17:19.936 "prchk_reftag": false, 00:17:19.936 "psk": 
"/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:19.936 "reconnect_delay_sec": 0, 00:17:19.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.936 "traddr": "10.0.0.2", 00:17:19.936 "trsvcid": "4420", 00:17:19.936 "trtype": "TCP" 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_nvme_set_hotplug", 00:17:19.936 "params": { 00:17:19.936 "enable": false, 00:17:19.936 "period_us": 100000 00:17:19.936 } 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "method": "bdev_wait_for_examine" 00:17:19.936 } 00:17:19.936 ] 00:17:19.936 }, 00:17:19.936 { 00:17:19.936 "subsystem": "nbd", 00:17:19.936 "config": [] 00:17:19.936 } 00:17:19.936 ] 00:17:19.936 }' 00:17:19.936 07:23:21 -- target/tls.sh@208 -- # killprocess 89269 00:17:19.936 07:23:21 -- common/autotest_common.sh@926 -- # '[' -z 89269 ']' 00:17:19.936 07:23:21 -- common/autotest_common.sh@930 -- # kill -0 89269 00:17:19.936 07:23:21 -- common/autotest_common.sh@931 -- # uname 00:17:19.936 07:23:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:19.936 07:23:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89269 00:17:19.936 07:23:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:19.936 killing process with pid 89269 00:17:19.936 07:23:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:19.936 07:23:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89269' 00:17:19.936 07:23:21 -- common/autotest_common.sh@945 -- # kill 89269 00:17:19.936 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.936 00:17:19.936 Latency(us) 00:17:19.936 [2024-11-04T07:23:21.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.936 [2024-11-04T07:23:21.777Z] =================================================================================================================== 00:17:19.936 [2024-11-04T07:23:21.777Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.936 07:23:21 -- common/autotest_common.sh@950 -- # wait 89269 00:17:20.195 07:23:21 -- target/tls.sh@209 -- # killprocess 89168 00:17:20.195 07:23:21 -- common/autotest_common.sh@926 -- # '[' -z 89168 ']' 00:17:20.195 07:23:21 -- common/autotest_common.sh@930 -- # kill -0 89168 00:17:20.195 07:23:21 -- common/autotest_common.sh@931 -- # uname 00:17:20.195 07:23:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.195 07:23:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89168 00:17:20.195 07:23:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:20.195 killing process with pid 89168 00:17:20.195 07:23:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:20.195 07:23:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89168' 00:17:20.195 07:23:21 -- common/autotest_common.sh@945 -- # kill 89168 00:17:20.195 07:23:21 -- common/autotest_common.sh@950 -- # wait 89168 00:17:20.454 07:23:22 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:20.454 07:23:22 -- target/tls.sh@212 -- # echo '{ 00:17:20.454 "subsystems": [ 00:17:20.454 { 00:17:20.454 "subsystem": "iobuf", 00:17:20.454 "config": [ 00:17:20.454 { 00:17:20.454 "method": "iobuf_set_options", 00:17:20.454 "params": { 00:17:20.454 "large_bufsize": 135168, 00:17:20.454 "large_pool_count": 1024, 00:17:20.454 "small_bufsize": 8192, 00:17:20.454 "small_pool_count": 8192 00:17:20.454 } 00:17:20.454 } 00:17:20.454 ] 00:17:20.454 }, 00:17:20.454 { 
00:17:20.454 "subsystem": "sock", 00:17:20.454 "config": [ 00:17:20.454 { 00:17:20.454 "method": "sock_impl_set_options", 00:17:20.454 "params": { 00:17:20.454 "enable_ktls": false, 00:17:20.454 "enable_placement_id": 0, 00:17:20.454 "enable_quickack": false, 00:17:20.454 "enable_recv_pipe": true, 00:17:20.454 "enable_zerocopy_send_client": false, 00:17:20.454 "enable_zerocopy_send_server": true, 00:17:20.454 "impl_name": "posix", 00:17:20.454 "recv_buf_size": 2097152, 00:17:20.454 "send_buf_size": 2097152, 00:17:20.454 "tls_version": 0, 00:17:20.454 "zerocopy_threshold": 0 00:17:20.454 } 00:17:20.454 }, 00:17:20.454 { 00:17:20.454 "method": "sock_impl_set_options", 00:17:20.454 "params": { 00:17:20.454 "enable_ktls": false, 00:17:20.454 "enable_placement_id": 0, 00:17:20.454 "enable_quickack": false, 00:17:20.454 "enable_recv_pipe": true, 00:17:20.454 "enable_zerocopy_send_client": false, 00:17:20.454 "enable_zerocopy_send_server": true, 00:17:20.454 "impl_name": "ssl", 00:17:20.454 "recv_buf_size": 4096, 00:17:20.454 "send_buf_size": 4096, 00:17:20.454 "tls_version": 0, 00:17:20.454 "zerocopy_threshold": 0 00:17:20.454 } 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "vmd", 00:17:20.455 "config": [] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "accel", 00:17:20.455 "config": [ 00:17:20.455 { 00:17:20.455 "method": "accel_set_options", 00:17:20.455 "params": { 00:17:20.455 "buf_count": 2048, 00:17:20.455 "large_cache_size": 16, 00:17:20.455 "sequence_count": 2048, 00:17:20.455 "small_cache_size": 128, 00:17:20.455 "task_count": 2048 00:17:20.455 } 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "bdev", 00:17:20.455 "config": [ 00:17:20.455 { 00:17:20.455 "method": "bdev_set_options", 00:17:20.455 "params": { 00:17:20.455 "bdev_auto_examine": true, 00:17:20.455 "bdev_io_cache_size": 256, 00:17:20.455 "bdev_io_pool_size": 65535, 00:17:20.455 "iobuf_large_cache_size": 16, 00:17:20.455 "iobuf_small_cache_size": 128 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "bdev_raid_set_options", 00:17:20.455 "params": { 00:17:20.455 "process_window_size_kb": 1024 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "bdev_iscsi_set_options", 00:17:20.455 "params": { 00:17:20.455 "timeout_sec": 30 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "bdev_nvme_set_options", 00:17:20.455 "params": { 00:17:20.455 "action_on_timeout": "none", 00:17:20.455 "allow_accel_sequence": false, 00:17:20.455 "arbitration_burst": 0, 00:17:20.455 "bdev_retry_count": 3, 00:17:20.455 "ctrlr_loss_timeout_sec": 0, 00:17:20.455 "delay_cmd_submit": true, 00:17:20.455 "fast_io_fail_timeout_sec": 0, 00:17:20.455 "generate_uuids": false, 00:17:20.455 "high_priority_weight": 0, 00:17:20.455 "io_path_stat": false, 00:17:20.455 "io_queue_requests": 0, 00:17:20.455 "keep_alive_timeout_ms": 10000, 00:17:20.455 "low_priority_weight": 0, 00:17:20.455 "medium_priority_weight": 0, 00:17:20.455 "nvme_adminq_poll_period_us": 10000, 00:17:20.455 "nvme_ioq_poll_period_us": 0, 00:17:20.455 "reconnect_delay_sec": 0, 00:17:20.455 "timeout_admin_us": 0, 00:17:20.455 "timeout_us": 0, 00:17:20.455 "transport_ack_timeout": 0, 00:17:20.455 "transport_retry_count": 4, 00:17:20.455 "transport_tos": 0 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "bdev_nvme_set_hotplug", 00:17:20.455 "params": { 00:17:20.455 "enable": false, 00:17:20.455 "period_us": 100000 00:17:20.455 } 00:17:20.455 
}, 00:17:20.455 { 00:17:20.455 "method": "bdev_malloc_create", 00:17:20.455 "params": { 00:17:20.455 "block_size": 4096, 00:17:20.455 "name": "malloc0", 00:17:20.455 "num_blocks": 8192, 00:17:20.455 "optimal_io_boundary": 0, 00:17:20.455 "physical_block_size": 4096, 00:17:20.455 "uuid": "6783c39c-c0f6-442c-8aac-3004b4a6c74f" 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "bdev_wait_for_examine" 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "nbd", 00:17:20.455 "config": [] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "scheduler", 00:17:20.455 "config": [ 00:17:20.455 { 00:17:20.455 "method": "framework_set_scheduler", 00:17:20.455 "params": { 00:17:20.455 "name": "static" 00:17:20.455 } 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "subsystem": "nvmf", 00:17:20.455 "config": [ 00:17:20.455 { 00:17:20.455 "method": "nvmf_set_config", 00:17:20.455 "params": { 00:17:20.455 "admin_cmd_passthru": { 00:17:20.455 "identify_ctrlr": false 00:17:20.455 }, 00:17:20.455 "discovery_filter": "match_any" 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_set_max_subsystems", 00:17:20.455 "params": { 00:17:20.455 "max_subsystems": 1024 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_set_crdt", 00:17:20.455 "params": { 00:17:20.455 "crdt1": 0, 00:17:20.455 "crdt2": 0, 00:17:20.455 "crdt3": 0 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_create_transport", 00:17:20.455 "params": { 00:17:20.455 "abort_timeout_sec": 1, 00:17:20.455 "buf_cache_size": 4294967295, 00:17:20.455 "c2h_success": false, 00:17:20.455 "dif_insert_or_strip": false, 00:17:20.455 "in_capsule_data_size": 4096, 00:17:20.455 "io_unit_size": 131072, 00:17:20.455 "max_aq_depth": 128, 00:17:20.455 "max_io_qpairs_per_ctrlr": 127, 00:17:20.455 "max_io_size": 131072, 00:17:20.455 "max_queue_depth": 128, 00:17:20.455 "num_shared_buffers": 511, 00:17:20.455 "sock_priority": 0, 00:17:20.455 "trtype": "TCP", 00:17:20.455 "zcopy": false 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_create_subsystem", 00:17:20.455 "params": { 00:17:20.455 "allow_any_host": false, 00:17:20.455 "ana_reporting": false, 00:17:20.455 "max_cntlid": 65519, 00:17:20.455 "max_namespaces": 10, 00:17:20.455 "min_cntlid": 1, 00:17:20.455 "model_number": "SPDK bdev Controller", 00:17:20.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.455 "serial_number": "SPDK00000000000001" 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_subsystem_add_host", 00:17:20.455 "params": { 00:17:20.455 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.455 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_subsystem_add_ns", 00:17:20.455 "params": { 00:17:20.455 "namespace": { 00:17:20.455 "bdev_name": "malloc0", 00:17:20.455 "nguid": "6783C39CC0F6442C8AAC3004B4A6C74F", 00:17:20.455 "nsid": 1, 00:17:20.455 "uuid": "6783c39c-c0f6-442c-8aac-3004b4a6c74f" 00:17:20.455 }, 00:17:20.455 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:20.455 } 00:17:20.455 }, 00:17:20.455 { 00:17:20.455 "method": "nvmf_subsystem_add_listener", 00:17:20.455 "params": { 00:17:20.455 "listen_address": { 00:17:20.455 "adrfam": "IPv4", 00:17:20.455 "traddr": "10.0.0.2", 00:17:20.455 "trsvcid": "4420", 00:17:20.455 "trtype": "TCP" 00:17:20.455 }, 00:17:20.455 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:20.455 "secure_channel": true 00:17:20.455 } 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 } 00:17:20.455 ] 00:17:20.455 }' 00:17:20.455 07:23:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.455 07:23:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:20.455 07:23:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.455 07:23:22 -- nvmf/common.sh@469 -- # nvmfpid=89345 00:17:20.455 07:23:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:20.455 07:23:22 -- nvmf/common.sh@470 -- # waitforlisten 89345 00:17:20.455 07:23:22 -- common/autotest_common.sh@819 -- # '[' -z 89345 ']' 00:17:20.455 07:23:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.455 07:23:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.455 07:23:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.455 07:23:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.455 07:23:22 -- common/autotest_common.sh@10 -- # set +x 00:17:20.455 [2024-11-04 07:23:22.141086] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:20.455 [2024-11-04 07:23:22.141183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.455 [2024-11-04 07:23:22.280302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.714 [2024-11-04 07:23:22.344107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.714 [2024-11-04 07:23:22.344540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.714 [2024-11-04 07:23:22.344629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.714 [2024-11-04 07:23:22.344691] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.714 [2024-11-04 07:23:22.344784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.973 [2024-11-04 07:23:22.588998] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.973 [2024-11-04 07:23:22.620959] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.973 [2024-11-04 07:23:22.621196] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.540 07:23:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.540 07:23:23 -- common/autotest_common.sh@852 -- # return 0 00:17:21.540 07:23:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.540 07:23:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:21.540 07:23:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.540 07:23:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.540 07:23:23 -- target/tls.sh@216 -- # bdevperf_pid=89389 00:17:21.540 07:23:23 -- target/tls.sh@217 -- # waitforlisten 89389 /var/tmp/bdevperf.sock 00:17:21.540 07:23:23 -- common/autotest_common.sh@819 -- # '[' -z 89389 ']' 00:17:21.540 07:23:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.540 07:23:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.540 07:23:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.540 07:23:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.540 07:23:23 -- common/autotest_common.sh@10 -- # set +x 00:17:21.540 07:23:23 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:21.540 07:23:23 -- target/tls.sh@213 -- # echo '{ 00:17:21.540 "subsystems": [ 00:17:21.540 { 00:17:21.540 "subsystem": "iobuf", 00:17:21.540 "config": [ 00:17:21.540 { 00:17:21.540 "method": "iobuf_set_options", 00:17:21.540 "params": { 00:17:21.540 "large_bufsize": 135168, 00:17:21.540 "large_pool_count": 1024, 00:17:21.540 "small_bufsize": 8192, 00:17:21.540 "small_pool_count": 8192 00:17:21.540 } 00:17:21.540 } 00:17:21.540 ] 00:17:21.540 }, 00:17:21.540 { 00:17:21.540 "subsystem": "sock", 00:17:21.540 "config": [ 00:17:21.540 { 00:17:21.540 "method": "sock_impl_set_options", 00:17:21.540 "params": { 00:17:21.540 "enable_ktls": false, 00:17:21.540 "enable_placement_id": 0, 00:17:21.540 "enable_quickack": false, 00:17:21.540 "enable_recv_pipe": true, 00:17:21.541 "enable_zerocopy_send_client": false, 00:17:21.541 "enable_zerocopy_send_server": true, 00:17:21.541 "impl_name": "posix", 00:17:21.541 "recv_buf_size": 2097152, 00:17:21.541 "send_buf_size": 2097152, 00:17:21.541 "tls_version": 0, 00:17:21.541 "zerocopy_threshold": 0 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "sock_impl_set_options", 00:17:21.541 "params": { 00:17:21.541 "enable_ktls": false, 00:17:21.541 "enable_placement_id": 0, 00:17:21.541 "enable_quickack": false, 00:17:21.541 "enable_recv_pipe": true, 00:17:21.541 "enable_zerocopy_send_client": false, 00:17:21.541 "enable_zerocopy_send_server": true, 00:17:21.541 "impl_name": "ssl", 00:17:21.541 "recv_buf_size": 4096, 00:17:21.541 "send_buf_size": 4096, 00:17:21.541 "tls_version": 0, 00:17:21.541 "zerocopy_threshold": 0 
00:17:21.541 } 00:17:21.541 } 00:17:21.541 ] 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "subsystem": "vmd", 00:17:21.541 "config": [] 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "subsystem": "accel", 00:17:21.541 "config": [ 00:17:21.541 { 00:17:21.541 "method": "accel_set_options", 00:17:21.541 "params": { 00:17:21.541 "buf_count": 2048, 00:17:21.541 "large_cache_size": 16, 00:17:21.541 "sequence_count": 2048, 00:17:21.541 "small_cache_size": 128, 00:17:21.541 "task_count": 2048 00:17:21.541 } 00:17:21.541 } 00:17:21.541 ] 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "subsystem": "bdev", 00:17:21.541 "config": [ 00:17:21.541 { 00:17:21.541 "method": "bdev_set_options", 00:17:21.541 "params": { 00:17:21.541 "bdev_auto_examine": true, 00:17:21.541 "bdev_io_cache_size": 256, 00:17:21.541 "bdev_io_pool_size": 65535, 00:17:21.541 "iobuf_large_cache_size": 16, 00:17:21.541 "iobuf_small_cache_size": 128 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_raid_set_options", 00:17:21.541 "params": { 00:17:21.541 "process_window_size_kb": 1024 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_iscsi_set_options", 00:17:21.541 "params": { 00:17:21.541 "timeout_sec": 30 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_nvme_set_options", 00:17:21.541 "params": { 00:17:21.541 "action_on_timeout": "none", 00:17:21.541 "allow_accel_sequence": false, 00:17:21.541 "arbitration_burst": 0, 00:17:21.541 "bdev_retry_count": 3, 00:17:21.541 "ctrlr_loss_timeout_sec": 0, 00:17:21.541 "delay_cmd_submit": true, 00:17:21.541 "fast_io_fail_timeout_sec": 0, 00:17:21.541 "generate_uuids": false, 00:17:21.541 "high_priority_weight": 0, 00:17:21.541 "io_path_stat": false, 00:17:21.541 "io_queue_requests": 512, 00:17:21.541 "keep_alive_timeout_ms": 10000, 00:17:21.541 "low_priority_weight": 0, 00:17:21.541 "medium_priority_weight": 0, 00:17:21.541 "nvme_adminq_poll_period_us": 10000, 00:17:21.541 "nvme_ioq_poll_period_us": 0, 00:17:21.541 "reconnect_delay_sec": 0, 00:17:21.541 "timeout_admin_us": 0, 00:17:21.541 "timeout_us": 0, 00:17:21.541 "transport_ack_timeout": 0, 00:17:21.541 "transport_retry_count": 4, 00:17:21.541 "transport_tos": 0 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_nvme_attach_controller", 00:17:21.541 "params": { 00:17:21.541 "adrfam": "IPv4", 00:17:21.541 "ctrlr_loss_timeout_sec": 0, 00:17:21.541 "ddgst": false, 00:17:21.541 "fast_io_fail_timeout_sec": 0, 00:17:21.541 "hdgst": false, 00:17:21.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.541 "name": "TLSTEST", 00:17:21.541 "prchk_guard": false, 00:17:21.541 "prchk_reftag": false, 00:17:21.541 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:21.541 "reconnect_delay_sec": 0, 00:17:21.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.541 "traddr": "10.0.0.2", 00:17:21.541 "trsvcid": "4420", 00:17:21.541 "trtype": "TCP" 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_nvme_set_hotplug", 00:17:21.541 "params": { 00:17:21.541 "enable": false, 00:17:21.541 "period_us": 100000 00:17:21.541 } 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "method": "bdev_wait_for_examine" 00:17:21.541 } 00:17:21.541 ] 00:17:21.541 }, 00:17:21.541 { 00:17:21.541 "subsystem": "nbd", 00:17:21.541 "config": [] 00:17:21.541 } 00:17:21.541 ] 00:17:21.541 }' 00:17:21.541 [2024-11-04 07:23:23.173992] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
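In the bdevperf configuration just echoed, the client half of the TLS pair is the bdev_nvme_attach_controller entry: it dials 10.0.0.2:4420 as nqn.2016-06.io.spdk:host1 and points at the same key_long.txt PSK the target's nvmf_subsystem_add_host was given. The identical attach can also be issued over bdevperf's RPC socket after starting it with -z; the FIPS test later in this log does exactly that, and for reference the RPC form looks like this (paths and NQNs taken from the trace above):

```bash
# RPC form of the same TLS-protected attach against a bdevperf started with -z.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
```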
00:17:21.541 [2024-11-04 07:23:23.174085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89389 ] 00:17:21.541 [2024-11-04 07:23:23.314552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.541 [2024-11-04 07:23:23.369603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.800 [2024-11-04 07:23:23.515199] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.367 07:23:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.367 07:23:24 -- common/autotest_common.sh@852 -- # return 0 00:17:22.367 07:23:24 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.625 Running I/O for 10 seconds... 00:17:32.604 00:17:32.604 Latency(us) 00:17:32.605 [2024-11-04T07:23:34.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.605 [2024-11-04T07:23:34.446Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.605 Verification LBA range: start 0x0 length 0x2000 00:17:32.605 TLSTESTn1 : 10.01 6993.30 27.32 0.00 0.00 18279.71 2115.03 19541.64 00:17:32.605 [2024-11-04T07:23:34.446Z] =================================================================================================================== 00:17:32.605 [2024-11-04T07:23:34.446Z] Total : 6993.30 27.32 0.00 0.00 18279.71 2115.03 19541.64 00:17:32.605 0 00:17:32.605 07:23:34 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.605 07:23:34 -- target/tls.sh@223 -- # killprocess 89389 00:17:32.605 07:23:34 -- common/autotest_common.sh@926 -- # '[' -z 89389 ']' 00:17:32.605 07:23:34 -- common/autotest_common.sh@930 -- # kill -0 89389 00:17:32.605 07:23:34 -- common/autotest_common.sh@931 -- # uname 00:17:32.605 07:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.605 07:23:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89389 00:17:32.605 killing process with pid 89389 00:17:32.605 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.605 00:17:32.605 Latency(us) 00:17:32.605 [2024-11-04T07:23:34.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.605 [2024-11-04T07:23:34.446Z] =================================================================================================================== 00:17:32.605 [2024-11-04T07:23:34.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.605 07:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:32.605 07:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:32.605 07:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89389' 00:17:32.605 07:23:34 -- common/autotest_common.sh@945 -- # kill 89389 00:17:32.605 07:23:34 -- common/autotest_common.sh@950 -- # wait 89389 00:17:32.864 07:23:34 -- target/tls.sh@224 -- # killprocess 89345 00:17:32.864 07:23:34 -- common/autotest_common.sh@926 -- # '[' -z 89345 ']' 00:17:32.864 07:23:34 -- common/autotest_common.sh@930 -- # kill -0 89345 00:17:32.864 07:23:34 -- common/autotest_common.sh@931 -- # uname 00:17:32.864 07:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.864 07:23:34 -- common/autotest_common.sh@932 -- 
# ps --no-headers -o comm= 89345 00:17:32.864 killing process with pid 89345 00:17:32.864 07:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.864 07:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.864 07:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89345' 00:17:32.864 07:23:34 -- common/autotest_common.sh@945 -- # kill 89345 00:17:32.864 07:23:34 -- common/autotest_common.sh@950 -- # wait 89345 00:17:33.123 07:23:34 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:33.123 07:23:34 -- target/tls.sh@227 -- # cleanup 00:17:33.123 07:23:34 -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.123 07:23:34 -- common/autotest_common.sh@796 -- # type=--id 00:17:33.123 07:23:34 -- common/autotest_common.sh@797 -- # id=0 00:17:33.123 07:23:34 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:33.123 07:23:34 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.123 07:23:34 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:33.123 07:23:34 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:33.123 07:23:34 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:33.123 07:23:34 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.123 nvmf_trace.0 00:17:33.123 07:23:34 -- common/autotest_common.sh@811 -- # return 0 00:17:33.123 07:23:34 -- target/tls.sh@16 -- # killprocess 89389 00:17:33.123 07:23:34 -- common/autotest_common.sh@926 -- # '[' -z 89389 ']' 00:17:33.123 Process with pid 89389 is not found 00:17:33.123 07:23:34 -- common/autotest_common.sh@930 -- # kill -0 89389 00:17:33.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89389) - No such process 00:17:33.123 07:23:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89389 is not found' 00:17:33.123 07:23:34 -- target/tls.sh@17 -- # nvmftestfini 00:17:33.123 07:23:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.123 07:23:34 -- nvmf/common.sh@116 -- # sync 00:17:33.123 07:23:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.123 07:23:34 -- nvmf/common.sh@119 -- # set +e 00:17:33.123 07:23:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.123 07:23:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.123 rmmod nvme_tcp 00:17:33.382 rmmod nvme_fabrics 00:17:33.382 rmmod nvme_keyring 00:17:33.382 07:23:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.382 Process with pid 89345 is not found 00:17:33.382 07:23:34 -- nvmf/common.sh@123 -- # set -e 00:17:33.382 07:23:34 -- nvmf/common.sh@124 -- # return 0 00:17:33.382 07:23:34 -- nvmf/common.sh@477 -- # '[' -n 89345 ']' 00:17:33.382 07:23:34 -- nvmf/common.sh@478 -- # killprocess 89345 00:17:33.382 07:23:34 -- common/autotest_common.sh@926 -- # '[' -z 89345 ']' 00:17:33.382 07:23:34 -- common/autotest_common.sh@930 -- # kill -0 89345 00:17:33.382 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89345) - No such process 00:17:33.382 07:23:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89345 is not found' 00:17:33.382 07:23:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.382 07:23:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.382 07:23:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.382 07:23:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
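A quick consistency check on the TLSTESTn1 result table from the 10-second verify run above: bdevperf was started with -q 128 -o 4096, so the MiB/s column is simply the IOPS column scaled by the 4096-byte I/O size.

```bash
# 6993.30 IOPS at 4096 bytes per I/O -> the MiB/s column of the table above
awk 'BEGIN { printf "%.2f MiB/s\n", 6993.30 * 4096 / (1024 * 1024) }'    # prints 27.32 MiB/s
```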
00:17:33.382 07:23:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.382 07:23:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.382 07:23:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.382 07:23:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.382 07:23:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:33.382 07:23:35 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.382 00:17:33.382 real 1m11.174s 00:17:33.382 user 1m44.980s 00:17:33.382 sys 0m27.509s 00:17:33.382 07:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.382 07:23:35 -- common/autotest_common.sh@10 -- # set +x 00:17:33.382 ************************************ 00:17:33.382 END TEST nvmf_tls 00:17:33.382 ************************************ 00:17:33.382 07:23:35 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.382 07:23:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:33.382 07:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:33.382 07:23:35 -- common/autotest_common.sh@10 -- # set +x 00:17:33.382 ************************************ 00:17:33.382 START TEST nvmf_fips 00:17:33.382 ************************************ 00:17:33.382 07:23:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.382 * Looking for test storage... 00:17:33.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:33.382 07:23:35 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.382 07:23:35 -- nvmf/common.sh@7 -- # uname -s 00:17:33.382 07:23:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.382 07:23:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.382 07:23:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.382 07:23:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.382 07:23:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.382 07:23:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.382 07:23:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.382 07:23:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.382 07:23:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.382 07:23:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.382 07:23:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:33.382 07:23:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:33.382 07:23:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.382 07:23:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.382 07:23:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.382 07:23:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.382 07:23:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.382 07:23:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.382 07:23:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.383 07:23:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.383 07:23:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.383 07:23:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.383 07:23:35 -- paths/export.sh@5 -- # export PATH 00:17:33.383 07:23:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.383 07:23:35 -- nvmf/common.sh@46 -- # : 0 00:17:33.383 07:23:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:33.383 07:23:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:33.383 07:23:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:33.383 07:23:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.383 07:23:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.383 07:23:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:33.383 07:23:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:33.383 07:23:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:33.383 07:23:35 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.383 07:23:35 -- fips/fips.sh@89 -- # check_openssl_version 00:17:33.383 07:23:35 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:33.383 07:23:35 -- fips/fips.sh@85 -- # openssl version 00:17:33.383 07:23:35 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:33.383 07:23:35 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:33.383 07:23:35 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:33.642 07:23:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:33.642 07:23:35 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:17:33.642 07:23:35 -- scripts/common.sh@335 -- # IFS=.-: 00:17:33.642 07:23:35 -- scripts/common.sh@335 -- # read -ra ver1 00:17:33.642 07:23:35 -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.642 07:23:35 -- scripts/common.sh@336 -- # read -ra ver2 00:17:33.642 07:23:35 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:33.642 07:23:35 -- scripts/common.sh@339 -- # ver1_l=3 00:17:33.642 07:23:35 -- scripts/common.sh@340 -- # ver2_l=3 00:17:33.642 07:23:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:33.642 07:23:35 -- scripts/common.sh@343 -- # case "$op" in 00:17:33.642 07:23:35 -- scripts/common.sh@347 -- # : 1 00:17:33.642 07:23:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:33.642 07:23:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.642 07:23:35 -- scripts/common.sh@364 -- # decimal 3 00:17:33.642 07:23:35 -- scripts/common.sh@352 -- # local d=3 00:17:33.642 07:23:35 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.642 07:23:35 -- scripts/common.sh@354 -- # echo 3 00:17:33.642 07:23:35 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:33.642 07:23:35 -- scripts/common.sh@365 -- # decimal 3 00:17:33.642 07:23:35 -- scripts/common.sh@352 -- # local d=3 00:17:33.642 07:23:35 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.642 07:23:35 -- scripts/common.sh@354 -- # echo 3 00:17:33.642 07:23:35 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:33.642 07:23:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:33.642 07:23:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:33.642 07:23:35 -- scripts/common.sh@363 -- # (( v++ )) 00:17:33.642 07:23:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.642 07:23:35 -- scripts/common.sh@364 -- # decimal 1 00:17:33.642 07:23:35 -- scripts/common.sh@352 -- # local d=1 00:17:33.642 07:23:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.642 07:23:35 -- scripts/common.sh@354 -- # echo 1 00:17:33.642 07:23:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:33.642 07:23:35 -- scripts/common.sh@365 -- # decimal 0 00:17:33.642 07:23:35 -- scripts/common.sh@352 -- # local d=0 00:17:33.642 07:23:35 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:33.642 07:23:35 -- scripts/common.sh@354 -- # echo 0 00:17:33.642 07:23:35 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:33.642 07:23:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:33.642 07:23:35 -- scripts/common.sh@366 -- # return 0 00:17:33.642 07:23:35 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:33.642 07:23:35 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:33.642 07:23:35 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:33.642 07:23:35 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:33.642 07:23:35 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:33.642 07:23:35 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:33.642 07:23:35 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:33.642 07:23:35 -- fips/fips.sh@113 -- # build_openssl_config 00:17:33.642 07:23:35 -- fips/fips.sh@37 -- # cat 00:17:33.642 07:23:35 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:33.642 07:23:35 -- fips/fips.sh@58 -- # cat - 00:17:33.642 07:23:35 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:33.642 07:23:35 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:33.642 07:23:35 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:33.642 07:23:35 -- fips/fips.sh@116 -- # openssl list -providers 00:17:33.642 07:23:35 -- fips/fips.sh@116 -- # grep name 00:17:33.642 07:23:35 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:33.642 07:23:35 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:33.642 07:23:35 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:33.642 07:23:35 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:33.642 07:23:35 -- fips/fips.sh@127 -- # : 00:17:33.642 07:23:35 -- common/autotest_common.sh@640 -- # local es=0 00:17:33.642 07:23:35 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:33.642 07:23:35 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:33.642 07:23:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.642 07:23:35 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:33.642 07:23:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.642 07:23:35 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:33.642 07:23:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.642 07:23:35 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:33.642 07:23:35 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:33.642 07:23:35 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:33.642 Error setting digest 00:17:33.642 40E20E1E627F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:33.642 40E20E1E627F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:33.642 07:23:35 -- common/autotest_common.sh@643 -- # es=1 00:17:33.642 07:23:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:33.642 07:23:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:33.642 07:23:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:33.642 07:23:35 -- fips/fips.sh@130 -- # nvmftestinit 00:17:33.642 07:23:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:33.642 07:23:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.642 07:23:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:33.642 07:23:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:33.642 07:23:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:33.642 07:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.642 07:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.642 07:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.642 07:23:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:33.642 07:23:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:33.642 07:23:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:33.642 07:23:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:33.642 07:23:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:33.642 07:23:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:33.642 07:23:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.642 07:23:35 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.642 07:23:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:33.642 07:23:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:33.642 07:23:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:33.642 07:23:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:33.642 07:23:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:33.642 07:23:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.642 07:23:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:33.642 07:23:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:33.642 07:23:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:33.642 07:23:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:33.642 07:23:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:33.642 07:23:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:33.642 Cannot find device "nvmf_tgt_br" 00:17:33.642 07:23:35 -- nvmf/common.sh@154 -- # true 00:17:33.642 07:23:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:33.642 Cannot find device "nvmf_tgt_br2" 00:17:33.642 07:23:35 -- nvmf/common.sh@155 -- # true 00:17:33.642 07:23:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:33.642 07:23:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:33.642 Cannot find device "nvmf_tgt_br" 00:17:33.642 07:23:35 -- nvmf/common.sh@157 -- # true 00:17:33.643 07:23:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:33.643 Cannot find device "nvmf_tgt_br2" 00:17:33.643 07:23:35 -- nvmf/common.sh@158 -- # true 00:17:33.643 07:23:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:33.901 07:23:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:33.901 07:23:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.901 07:23:35 -- nvmf/common.sh@161 -- # true 00:17:33.901 07:23:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.901 07:23:35 -- nvmf/common.sh@162 -- # true 00:17:33.901 07:23:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:33.901 07:23:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:33.901 07:23:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:33.901 07:23:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:33.901 07:23:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:33.901 07:23:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:33.901 07:23:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:33.901 07:23:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:33.901 07:23:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:33.901 07:23:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:33.901 07:23:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:33.901 07:23:35 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:33.901 07:23:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:33.901 07:23:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:33.901 07:23:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:33.901 07:23:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:33.901 07:23:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:33.901 07:23:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:33.901 07:23:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:33.901 07:23:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:33.901 07:23:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:33.901 07:23:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:33.901 07:23:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:33.902 07:23:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:33.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:17:33.902 00:17:33.902 --- 10.0.0.2 ping statistics --- 00:17:33.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.902 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:33.902 07:23:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:33.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:33.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:33.902 00:17:33.902 --- 10.0.0.3 ping statistics --- 00:17:33.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.902 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:33.902 07:23:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:33.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:33.902 00:17:33.902 --- 10.0.0.1 ping statistics --- 00:17:33.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.902 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:33.902 07:23:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.902 07:23:35 -- nvmf/common.sh@421 -- # return 0 00:17:33.902 07:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.902 07:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.902 07:23:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:33.902 07:23:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:33.902 07:23:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.902 07:23:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:33.902 07:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:33.902 07:23:35 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:33.902 07:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.902 07:23:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.902 07:23:35 -- common/autotest_common.sh@10 -- # set +x 00:17:33.902 07:23:35 -- nvmf/common.sh@469 -- # nvmfpid=89752 00:17:33.902 07:23:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.902 07:23:35 -- nvmf/common.sh@470 -- # waitforlisten 89752 00:17:33.902 07:23:35 -- common/autotest_common.sh@819 -- # '[' -z 89752 ']' 00:17:33.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.902 07:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.902 07:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.902 07:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.902 07:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.902 07:23:35 -- common/autotest_common.sh@10 -- # set +x 00:17:34.160 [2024-11-04 07:23:35.806832] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:34.160 [2024-11-04 07:23:35.807102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.160 [2024-11-04 07:23:35.947334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.419 [2024-11-04 07:23:36.020211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.419 [2024-11-04 07:23:36.020362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.419 [2024-11-04 07:23:36.020375] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.419 [2024-11-04 07:23:36.020384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
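Two preconditions were established near the top of fips.sh, before any of this networking: the OpenSSL version comparison (3.1.1 against the 3.0.0 floor, via the cmp_versions helper) and the deliberately failing 'NOT openssl md5 /dev/fd/62', whose 'Error setting digest' output is the positive signal that the FIPS provider is actually enforcing. A compact sketch of equivalent checks, using sort -V in place of the cmp_versions helper and an illustrative md5 target:

```bash
# Precondition 1: OpenSSL must be 3.0.0 or newer.
ver=$(openssl version | awk '{print $2}')
printf '%s\n' 3.0.0 "$ver" | sort -V -C \
        || { echo "OpenSSL $ver is older than 3.0.0" >&2; exit 1; }

# Precondition 2: with the FIPS provider enforcing, a non-approved digest such
# as MD5 must fail (mirrors the NOT openssl md5 check in the trace above).
if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 still works, so FIPS mode does not appear to be enforced" >&2
        exit 1
fi
```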
00:17:34.419 [2024-11-04 07:23:36.020419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.985 07:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.985 07:23:36 -- common/autotest_common.sh@852 -- # return 0 00:17:34.985 07:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.985 07:23:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.985 07:23:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.244 07:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.244 07:23:36 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:35.244 07:23:36 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.244 07:23:36 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.244 07:23:36 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.244 07:23:36 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.244 07:23:36 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.244 07:23:36 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.244 07:23:36 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.503 [2024-11-04 07:23:37.101998] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.503 [2024-11-04 07:23:37.117971] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.503 [2024-11-04 07:23:37.118191] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.503 malloc0 00:17:35.503 07:23:37 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.503 07:23:37 -- fips/fips.sh@147 -- # bdevperf_pid=89805 00:17:35.503 07:23:37 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.503 07:23:37 -- fips/fips.sh@148 -- # waitforlisten 89805 /var/tmp/bdevperf.sock 00:17:35.503 07:23:37 -- common/autotest_common.sh@819 -- # '[' -z 89805 ']' 00:17:35.503 07:23:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.503 07:23:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.503 07:23:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.503 07:23:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.503 07:23:37 -- common/autotest_common.sh@10 -- # set +x 00:17:35.503 [2024-11-04 07:23:37.257490] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
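The key handling above is the heart of the TLS setup: the PSK is written verbatim to key.txt, the file is locked down with chmod 0600, and the same path is then referenced on the target side (nvmf_subsystem_add_host) and, a few lines below, on the initiator side (bdev_nvme_attach_controller --psk). A sketch of the target-side half follows; the key value and paths are the ones in the trace, but the rpc.py argument spelling is assumed here rather than copied from this run, which feeds a batched configuration instead:

```bash
# Sketch of the target-side PSK plumbing performed by setup_nvmf_tgt_conf above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

echo -n "$key" > "$key_path"
chmod 0600 "$key_path"          # PSK files must not be readable by other users
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"
```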
00:17:35.503 [2024-11-04 07:23:37.257755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89805 ] 00:17:35.761 [2024-11-04 07:23:37.399150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.761 [2024-11-04 07:23:37.471912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.696 07:23:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.696 07:23:38 -- common/autotest_common.sh@852 -- # return 0 00:17:36.696 07:23:38 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.696 [2024-11-04 07:23:38.411535] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.696 TLSTESTn1 00:17:36.697 07:23:38 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:36.955 Running I/O for 10 seconds... 00:17:46.924 00:17:46.924 Latency(us) 00:17:46.924 [2024-11-04T07:23:48.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.924 [2024-11-04T07:23:48.765Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:46.924 Verification LBA range: start 0x0 length 0x2000 00:17:46.924 TLSTESTn1 : 10.01 6881.25 26.88 0.00 0.00 18577.19 2189.50 24427.05 00:17:46.924 [2024-11-04T07:23:48.765Z] =================================================================================================================== 00:17:46.924 [2024-11-04T07:23:48.765Z] Total : 6881.25 26.88 0.00 0.00 18577.19 2189.50 24427.05 00:17:46.924 0 00:17:46.924 07:23:48 -- fips/fips.sh@1 -- # cleanup 00:17:46.924 07:23:48 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:46.924 07:23:48 -- common/autotest_common.sh@796 -- # type=--id 00:17:46.924 07:23:48 -- common/autotest_common.sh@797 -- # id=0 00:17:46.924 07:23:48 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:46.924 07:23:48 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:46.924 07:23:48 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:46.924 07:23:48 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:46.924 07:23:48 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:46.924 07:23:48 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:46.924 nvmf_trace.0 00:17:46.924 07:23:48 -- common/autotest_common.sh@811 -- # return 0 00:17:46.924 07:23:48 -- fips/fips.sh@16 -- # killprocess 89805 00:17:46.924 07:23:48 -- common/autotest_common.sh@926 -- # '[' -z 89805 ']' 00:17:46.924 07:23:48 -- common/autotest_common.sh@930 -- # kill -0 89805 00:17:46.924 07:23:48 -- common/autotest_common.sh@931 -- # uname 00:17:46.924 07:23:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.924 07:23:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89805 00:17:46.924 07:23:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:46.924 07:23:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:46.924 
killing process with pid 89805 00:17:46.924 07:23:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89805' 00:17:46.924 Received shutdown signal, test time was about 10.000000 seconds 00:17:46.924 00:17:46.924 Latency(us) 00:17:46.924 [2024-11-04T07:23:48.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.924 [2024-11-04T07:23:48.765Z] =================================================================================================================== 00:17:46.924 [2024-11-04T07:23:48.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.924 07:23:48 -- common/autotest_common.sh@945 -- # kill 89805 00:17:46.924 07:23:48 -- common/autotest_common.sh@950 -- # wait 89805 00:17:47.183 07:23:48 -- fips/fips.sh@17 -- # nvmftestfini 00:17:47.183 07:23:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:47.183 07:23:48 -- nvmf/common.sh@116 -- # sync 00:17:47.441 07:23:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:47.441 07:23:49 -- nvmf/common.sh@119 -- # set +e 00:17:47.441 07:23:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:47.441 07:23:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:47.441 rmmod nvme_tcp 00:17:47.441 rmmod nvme_fabrics 00:17:47.441 rmmod nvme_keyring 00:17:47.441 07:23:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:47.441 07:23:49 -- nvmf/common.sh@123 -- # set -e 00:17:47.441 07:23:49 -- nvmf/common.sh@124 -- # return 0 00:17:47.441 07:23:49 -- nvmf/common.sh@477 -- # '[' -n 89752 ']' 00:17:47.441 07:23:49 -- nvmf/common.sh@478 -- # killprocess 89752 00:17:47.441 07:23:49 -- common/autotest_common.sh@926 -- # '[' -z 89752 ']' 00:17:47.441 07:23:49 -- common/autotest_common.sh@930 -- # kill -0 89752 00:17:47.441 07:23:49 -- common/autotest_common.sh@931 -- # uname 00:17:47.441 07:23:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.441 07:23:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89752 00:17:47.441 07:23:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:47.441 07:23:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:47.441 killing process with pid 89752 00:17:47.441 07:23:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89752' 00:17:47.441 07:23:49 -- common/autotest_common.sh@945 -- # kill 89752 00:17:47.441 07:23:49 -- common/autotest_common.sh@950 -- # wait 89752 00:17:47.700 07:23:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.700 07:23:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:47.700 07:23:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:47.700 07:23:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.700 07:23:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:47.700 07:23:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.700 07:23:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.700 07:23:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.700 07:23:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:47.700 07:23:49 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:47.700 00:17:47.700 real 0m14.326s 00:17:47.700 user 0m18.488s 00:17:47.700 sys 0m6.391s 00:17:47.700 07:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.700 07:23:49 -- common/autotest_common.sh@10 -- # set +x 00:17:47.700 ************************************ 00:17:47.700 END TEST nvmf_fips 
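The killprocess calls traced here (first for bdevperf pid 89805, then for the nvmf target pid 89752) follow the same probe-then-kill flow from autotest_common.sh. A simplified reconstruction, pieced together from the xtrace lines themselves; the real helper has additional branches, for example when the traced process turns out to be a sudo wrapper:

```bash
# Simplified reconstruction of the killprocess flow visible in the xtrace above.
killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # is the process still alive?
        if [ "$(uname)" = Linux ]; then
                process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then            # the real helper special-cases sudo
                echo "killing process with pid $pid"
                kill "$pid"
                wait "$pid"
        fi
}
```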
00:17:47.700 ************************************ 00:17:47.700 07:23:49 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:47.700 07:23:49 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:47.700 07:23:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:47.700 07:23:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:47.700 07:23:49 -- common/autotest_common.sh@10 -- # set +x 00:17:47.700 ************************************ 00:17:47.700 START TEST nvmf_fuzz 00:17:47.700 ************************************ 00:17:47.700 07:23:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:47.958 * Looking for test storage... 00:17:47.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:47.958 07:23:49 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.958 07:23:49 -- nvmf/common.sh@7 -- # uname -s 00:17:47.958 07:23:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.958 07:23:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.958 07:23:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.958 07:23:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.958 07:23:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.958 07:23:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.958 07:23:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.958 07:23:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.958 07:23:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.958 07:23:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.958 07:23:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:47.958 07:23:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:47.958 07:23:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.958 07:23:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.958 07:23:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.958 07:23:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.958 07:23:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.958 07:23:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.958 07:23:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.958 07:23:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.958 07:23:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.958 07:23:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.958 07:23:49 -- paths/export.sh@5 -- # export PATH 00:17:47.958 07:23:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.958 07:23:49 -- nvmf/common.sh@46 -- # : 0 00:17:47.958 07:23:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:47.958 07:23:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:47.958 07:23:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:47.958 07:23:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.958 07:23:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.958 07:23:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:47.958 07:23:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:47.958 07:23:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:47.958 07:23:49 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:47.958 07:23:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:47.958 07:23:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.958 07:23:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:47.958 07:23:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:47.958 07:23:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:47.958 07:23:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.958 07:23:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.959 07:23:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.959 07:23:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:47.959 07:23:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:47.959 07:23:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:47.959 07:23:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:47.959 07:23:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:47.959 07:23:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:47.959 07:23:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.959 07:23:49 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.959 07:23:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:47.959 07:23:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:47.959 07:23:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.959 07:23:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.959 07:23:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.959 07:23:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.959 07:23:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.959 07:23:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.959 07:23:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.959 07:23:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.959 07:23:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:47.959 07:23:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:47.959 Cannot find device "nvmf_tgt_br" 00:17:47.959 07:23:49 -- nvmf/common.sh@154 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.959 Cannot find device "nvmf_tgt_br2" 00:17:47.959 07:23:49 -- nvmf/common.sh@155 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:47.959 07:23:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:47.959 Cannot find device "nvmf_tgt_br" 00:17:47.959 07:23:49 -- nvmf/common.sh@157 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:47.959 Cannot find device "nvmf_tgt_br2" 00:17:47.959 07:23:49 -- nvmf/common.sh@158 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:47.959 07:23:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:47.959 07:23:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.959 07:23:49 -- nvmf/common.sh@161 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.959 07:23:49 -- nvmf/common.sh@162 -- # true 00:17:47.959 07:23:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.959 07:23:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.959 07:23:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.959 07:23:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.959 07:23:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.959 07:23:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.959 07:23:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.959 07:23:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:48.218 07:23:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:48.218 07:23:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:48.218 07:23:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:48.218 07:23:49 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:48.218 07:23:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:48.218 07:23:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.218 07:23:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.218 07:23:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.218 07:23:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:48.218 07:23:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:48.218 07:23:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.218 07:23:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:48.218 07:23:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.218 07:23:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.218 07:23:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.218 07:23:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:48.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:48.218 00:17:48.218 --- 10.0.0.2 ping statistics --- 00:17:48.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.218 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:48.218 07:23:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:48.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:48.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:17:48.218 00:17:48.218 --- 10.0.0.3 ping statistics --- 00:17:48.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.218 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:48.218 07:23:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:48.218 00:17:48.218 --- 10.0.0.1 ping statistics --- 00:17:48.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.218 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:48.218 07:23:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.218 07:23:49 -- nvmf/common.sh@421 -- # return 0 00:17:48.218 07:23:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:48.218 07:23:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.218 07:23:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:48.218 07:23:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:48.218 07:23:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.218 07:23:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:48.218 07:23:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:48.218 07:23:49 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90161 00:17:48.218 07:23:49 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:48.218 07:23:49 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:48.218 07:23:49 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90161 00:17:48.218 07:23:49 -- common/autotest_common.sh@819 -- # '[' -z 90161 ']' 00:17:48.218 07:23:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.218 07:23:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.218 07:23:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
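The nvmf_veth_init steps traced above build a small veth/bridge topology before the target is launched: nvmf_init_if (10.0.0.1/24) stays in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the three peer ends are enslaved to the nvmf_br bridge, with iptables opened for NVMe/TCP port 4420. A minimal standalone sketch of the same setup, assuming iproute2 and iptables are available and none of these interfaces exist yet:

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds (names taken from the trace).
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# one bridge ties the three root-namespace ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
# open NVMe/TCP port 4420 and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity check, mirroring the pings in the trace
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1

The target itself is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), which is why its TCP listeners on 10.0.0.2 are only reachable through this bridge.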
00:17:48.218 07:23:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.218 07:23:49 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 07:23:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.594 07:23:51 -- common/autotest_common.sh@852 -- # return 0 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.594 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.594 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:49.594 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.594 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 Malloc0 00:17:49.594 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.594 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.594 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.594 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.594 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.594 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.594 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:49.594 07:23:51 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:49.858 Shutting down the fuzz application 00:17:49.858 07:23:51 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:50.140 Shutting down the fuzz application 00:17:50.140 07:23:51 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.140 07:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.140 07:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:50.140 07:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.140 07:23:51 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:50.140 07:23:51 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:50.140 07:23:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.140 07:23:51 -- nvmf/common.sh@116 -- # sync 00:17:50.140 07:23:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.140 07:23:51 -- nvmf/common.sh@119 -- # set +e 00:17:50.140 07:23:51 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.140 07:23:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.140 rmmod nvme_tcp 00:17:50.140 rmmod nvme_fabrics 00:17:50.140 rmmod nvme_keyring 00:17:50.140 07:23:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.140 07:23:51 -- nvmf/common.sh@123 -- # set -e 00:17:50.140 07:23:51 -- nvmf/common.sh@124 -- # return 0 00:17:50.140 07:23:51 -- nvmf/common.sh@477 -- # '[' -n 90161 ']' 00:17:50.140 07:23:51 -- nvmf/common.sh@478 -- # killprocess 90161 00:17:50.140 07:23:51 -- common/autotest_common.sh@926 -- # '[' -z 90161 ']' 00:17:50.140 07:23:51 -- common/autotest_common.sh@930 -- # kill -0 90161 00:17:50.140 07:23:51 -- common/autotest_common.sh@931 -- # uname 00:17:50.140 07:23:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.140 07:23:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90161 00:17:50.140 07:23:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.140 07:23:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.140 07:23:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90161' 00:17:50.140 killing process with pid 90161 00:17:50.140 07:23:51 -- common/autotest_common.sh@945 -- # kill 90161 00:17:50.140 07:23:51 -- common/autotest_common.sh@950 -- # wait 90161 00:17:50.411 07:23:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.411 07:23:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.411 07:23:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.411 07:23:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.411 07:23:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.411 07:23:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.411 07:23:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.411 07:23:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.411 07:23:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.411 07:23:52 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:50.411 ************************************ 00:17:50.411 END TEST nvmf_fuzz 00:17:50.411 ************************************ 00:17:50.411 00:17:50.411 real 0m2.696s 00:17:50.411 user 0m2.802s 00:17:50.411 sys 0m0.673s 00:17:50.411 07:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.411 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.411 07:23:52 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:50.411 07:23:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:50.411 07:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.411 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.411 ************************************ 00:17:50.411 START TEST nvmf_multiconnection 00:17:50.411 ************************************ 00:17:50.411 07:23:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:50.670 * Looking for test storage... 
00:17:50.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:50.670 07:23:52 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.670 07:23:52 -- nvmf/common.sh@7 -- # uname -s 00:17:50.670 07:23:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.670 07:23:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.670 07:23:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.670 07:23:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.670 07:23:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.670 07:23:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.670 07:23:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.670 07:23:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.670 07:23:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.670 07:23:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:50.670 07:23:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:17:50.670 07:23:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.670 07:23:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.670 07:23:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.670 07:23:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.670 07:23:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.670 07:23:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.670 07:23:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.670 07:23:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.670 07:23:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.670 07:23:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.670 07:23:52 -- 
paths/export.sh@5 -- # export PATH 00:17:50.670 07:23:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.670 07:23:52 -- nvmf/common.sh@46 -- # : 0 00:17:50.670 07:23:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.670 07:23:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.670 07:23:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.670 07:23:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.670 07:23:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.670 07:23:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.670 07:23:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.670 07:23:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.670 07:23:52 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.670 07:23:52 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.670 07:23:52 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:50.670 07:23:52 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:50.670 07:23:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.670 07:23:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.670 07:23:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.670 07:23:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.670 07:23:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.670 07:23:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.670 07:23:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.670 07:23:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.670 07:23:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.670 07:23:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.670 07:23:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.670 07:23:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.670 07:23:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.670 07:23:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.670 07:23:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.670 07:23:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.670 07:23:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.670 07:23:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.670 07:23:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.670 07:23:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.670 07:23:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.670 07:23:52 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.670 07:23:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.670 07:23:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.670 Cannot find device "nvmf_tgt_br" 00:17:50.670 07:23:52 -- nvmf/common.sh@154 -- # true 00:17:50.670 07:23:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.670 Cannot find device "nvmf_tgt_br2" 00:17:50.670 07:23:52 -- nvmf/common.sh@155 -- # true 00:17:50.670 07:23:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.670 07:23:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:50.670 Cannot find device "nvmf_tgt_br" 00:17:50.670 07:23:52 -- nvmf/common.sh@157 -- # true 00:17:50.670 07:23:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:50.670 Cannot find device "nvmf_tgt_br2" 00:17:50.670 07:23:52 -- nvmf/common.sh@158 -- # true 00:17:50.670 07:23:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:50.670 07:23:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:50.670 07:23:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.670 07:23:52 -- nvmf/common.sh@161 -- # true 00:17:50.670 07:23:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.671 07:23:52 -- nvmf/common.sh@162 -- # true 00:17:50.671 07:23:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.671 07:23:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.671 07:23:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.671 07:23:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.671 07:23:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.929 07:23:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.929 07:23:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.930 07:23:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.930 07:23:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:50.930 07:23:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:50.930 07:23:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:50.930 07:23:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:50.930 07:23:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:50.930 07:23:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.930 07:23:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.930 07:23:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.930 07:23:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:50.930 07:23:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:50.930 07:23:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.930 07:23:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.930 07:23:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.930 
07:23:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.930 07:23:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.930 07:23:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:50.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:50.930 00:17:50.930 --- 10.0.0.2 ping statistics --- 00:17:50.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.930 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:50.930 07:23:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:50.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:50.930 00:17:50.930 --- 10.0.0.3 ping statistics --- 00:17:50.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.930 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:50.930 07:23:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:50.930 00:17:50.930 --- 10.0.0.1 ping statistics --- 00:17:50.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.930 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:50.930 07:23:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.930 07:23:52 -- nvmf/common.sh@421 -- # return 0 00:17:50.930 07:23:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.930 07:23:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.930 07:23:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.930 07:23:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.930 07:23:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.930 07:23:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.930 07:23:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.930 07:23:52 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:50.930 07:23:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.930 07:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.930 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.930 07:23:52 -- nvmf/common.sh@469 -- # nvmfpid=90363 00:17:50.930 07:23:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.930 07:23:52 -- nvmf/common.sh@470 -- # waitforlisten 90363 00:17:50.930 07:23:52 -- common/autotest_common.sh@819 -- # '[' -z 90363 ']' 00:17:50.930 07:23:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.930 07:23:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.930 07:23:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.930 07:23:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.930 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.930 [2024-11-04 07:23:52.739417] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:50.930 [2024-11-04 07:23:52.739668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.188 [2024-11-04 07:23:52.880249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.189 [2024-11-04 07:23:52.941078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.189 [2024-11-04 07:23:52.941505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.189 [2024-11-04 07:23:52.941823] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.189 [2024-11-04 07:23:52.942186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.189 [2024-11-04 07:23:52.942577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.189 [2024-11-04 07:23:52.942734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.189 [2024-11-04 07:23:52.942829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.189 [2024-11-04 07:23:52.942829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.123 07:23:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.123 07:23:53 -- common/autotest_common.sh@852 -- # return 0 00:17:52.123 07:23:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.123 07:23:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.123 07:23:53 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 [2024-11-04 07:23:53.800298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:52.123 07:23:53 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.123 07:23:53 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 Malloc1 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 [2024-11-04 07:23:53.878799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.123 07:23:53 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 Malloc2 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.123 07:23:53 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.123 07:23:53 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:52.123 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.123 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 Malloc3 00:17:52.381 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:53 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:52.381 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:53 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:52.381 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:53 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:52.381 07:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.381 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:52.381 
07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 Malloc4 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:52.381 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:52.381 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:52.381 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.381 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:52.381 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.381 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 Malloc5 00:17:52.381 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.381 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.382 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 Malloc6 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.382 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 Malloc7 00:17:52.382 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.382 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:52.382 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.382 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.640 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 Malloc8 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 
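The multiconnection setup loop traced through this stretch (and continuing below) performs the same four RPCs for each of the 11 subsystems: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. The test drives these through the rpc_cmd wrapper; an equivalent sketch that calls scripts/rpc.py directly, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
# Sketch of the per-subsystem RPC sequence used by multiconnection.sh.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # repo path as used in this run
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done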
00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.640 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 Malloc9 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.640 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 Malloc10 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.640 07:23:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 Malloc11 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.640 07:23:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:52.640 07:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.640 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.899 07:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.899 07:23:54 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:52.899 07:23:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.899 07:23:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.899 07:23:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:52.899 07:23:54 -- common/autotest_common.sh@1177 -- # local i=0 00:17:52.899 07:23:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.899 07:23:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:52.899 07:23:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:55.429 07:23:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:55.429 07:23:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:55.429 07:23:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:17:55.429 07:23:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:55.429 07:23:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.429 07:23:56 -- common/autotest_common.sh@1187 -- # return 0 00:17:55.429 07:23:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.429 07:23:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:55.429 07:23:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:55.429 07:23:56 -- common/autotest_common.sh@1177 -- # local i=0 00:17:55.429 07:23:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.429 07:23:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:55.429 07:23:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:57.328 07:23:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:57.328 07:23:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:57.328 07:23:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:17:57.328 07:23:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:57.328 07:23:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.328 07:23:58 -- common/autotest_common.sh@1187 -- # return 0 00:17:57.328 07:23:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:17:57.328 07:23:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:57.328 07:23:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:57.328 07:23:59 -- common/autotest_common.sh@1177 -- # local i=0 00:17:57.328 07:23:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.328 07:23:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:57.328 07:23:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:59.237 07:24:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:59.238 07:24:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:59.238 07:24:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:17:59.496 07:24:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:59.496 07:24:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.496 07:24:01 -- common/autotest_common.sh@1187 -- # return 0 00:17:59.496 07:24:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.496 07:24:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:59.496 07:24:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:59.496 07:24:01 -- common/autotest_common.sh@1177 -- # local i=0 00:17:59.496 07:24:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.496 07:24:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:59.496 07:24:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:02.031 07:24:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:02.031 07:24:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:02.031 07:24:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:02.031 07:24:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:02.031 07:24:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.031 07:24:03 -- common/autotest_common.sh@1187 -- # return 0 00:18:02.031 07:24:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:02.031 07:24:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:02.031 07:24:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:02.031 07:24:03 -- common/autotest_common.sh@1177 -- # local i=0 00:18:02.031 07:24:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.031 07:24:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:02.031 07:24:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:03.931 07:24:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:03.931 07:24:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:03.931 07:24:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:03.931 07:24:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:03.931 07:24:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.931 07:24:05 
-- common/autotest_common.sh@1187 -- # return 0 00:18:03.931 07:24:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.931 07:24:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:03.931 07:24:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:03.931 07:24:05 -- common/autotest_common.sh@1177 -- # local i=0 00:18:03.931 07:24:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.931 07:24:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:03.931 07:24:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:06.461 07:24:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:06.461 07:24:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:06.461 07:24:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:06.461 07:24:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:06.461 07:24:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.461 07:24:07 -- common/autotest_common.sh@1187 -- # return 0 00:18:06.461 07:24:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:06.461 07:24:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:06.461 07:24:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:06.461 07:24:07 -- common/autotest_common.sh@1177 -- # local i=0 00:18:06.461 07:24:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.461 07:24:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:06.461 07:24:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:08.363 07:24:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:08.363 07:24:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:08.363 07:24:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:08.363 07:24:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:08.363 07:24:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.363 07:24:09 -- common/autotest_common.sh@1187 -- # return 0 00:18:08.363 07:24:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.363 07:24:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:08.363 07:24:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:08.363 07:24:10 -- common/autotest_common.sh@1177 -- # local i=0 00:18:08.363 07:24:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.363 07:24:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:08.363 07:24:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:10.263 07:24:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:10.263 07:24:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:10.263 07:24:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:10.263 07:24:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
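Each pass of the connect loop in this part of the trace attaches the host to one subsystem over TCP and then polls lsblk until a block device reporting the expected serial appears (the waitforserial helper: sleep 2 between checks, up to 16 attempts). A standalone sketch of that pattern; connect_and_wait is a hypothetical helper name, and the hostnqn/hostid reuse the values generated earlier in this run:

# Sketch of the connect-and-poll pattern behind waitforserial.
connect_and_wait() {                      # hypothetical helper, not part of the test scripts
    local nqn=$1 serial=$2 i=0
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a \
        --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a
    while (( i++ <= 15 )); do
        sleep 2
        # done once lsblk reports exactly one namespace carrying this serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
    done
    return 1
}
connect_and_wait nqn.2016-06.io.spdk:cnode1 SPDK1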
00:18:10.263 07:24:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.263 07:24:12 -- common/autotest_common.sh@1187 -- # return 0 00:18:10.263 07:24:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.263 07:24:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:10.521 07:24:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:10.521 07:24:12 -- common/autotest_common.sh@1177 -- # local i=0 00:18:10.521 07:24:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.521 07:24:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:10.521 07:24:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:13.050 07:24:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:13.050 07:24:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:13.050 07:24:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:13.050 07:24:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:13.050 07:24:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.050 07:24:14 -- common/autotest_common.sh@1187 -- # return 0 00:18:13.050 07:24:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:13.050 07:24:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:13.050 07:24:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:13.050 07:24:14 -- common/autotest_common.sh@1177 -- # local i=0 00:18:13.050 07:24:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.050 07:24:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:13.050 07:24:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:14.991 07:24:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:14.991 07:24:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:14.991 07:24:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:14.991 07:24:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:14.991 07:24:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.991 07:24:16 -- common/autotest_common.sh@1187 -- # return 0 00:18:14.991 07:24:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.991 07:24:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:14.991 07:24:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:14.991 07:24:16 -- common/autotest_common.sh@1177 -- # local i=0 00:18:14.991 07:24:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.991 07:24:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:14.991 07:24:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:16.892 07:24:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:16.892 07:24:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:16.892 07:24:18 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:17.151 07:24:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:17.151 07:24:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.151 07:24:18 -- common/autotest_common.sh@1187 -- # return 0 00:18:17.151 07:24:18 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:17.151 [global] 00:18:17.151 thread=1 00:18:17.151 invalidate=1 00:18:17.151 rw=read 00:18:17.151 time_based=1 00:18:17.151 runtime=10 00:18:17.151 ioengine=libaio 00:18:17.151 direct=1 00:18:17.151 bs=262144 00:18:17.151 iodepth=64 00:18:17.151 norandommap=1 00:18:17.151 numjobs=1 00:18:17.151 00:18:17.151 [job0] 00:18:17.151 filename=/dev/nvme0n1 00:18:17.151 [job1] 00:18:17.151 filename=/dev/nvme10n1 00:18:17.151 [job2] 00:18:17.151 filename=/dev/nvme1n1 00:18:17.151 [job3] 00:18:17.151 filename=/dev/nvme2n1 00:18:17.151 [job4] 00:18:17.151 filename=/dev/nvme3n1 00:18:17.151 [job5] 00:18:17.151 filename=/dev/nvme4n1 00:18:17.151 [job6] 00:18:17.151 filename=/dev/nvme5n1 00:18:17.151 [job7] 00:18:17.151 filename=/dev/nvme6n1 00:18:17.151 [job8] 00:18:17.151 filename=/dev/nvme7n1 00:18:17.151 [job9] 00:18:17.151 filename=/dev/nvme8n1 00:18:17.151 [job10] 00:18:17.151 filename=/dev/nvme9n1 00:18:17.151 Could not set queue depth (nvme0n1) 00:18:17.151 Could not set queue depth (nvme10n1) 00:18:17.151 Could not set queue depth (nvme1n1) 00:18:17.151 Could not set queue depth (nvme2n1) 00:18:17.151 Could not set queue depth (nvme3n1) 00:18:17.151 Could not set queue depth (nvme4n1) 00:18:17.151 Could not set queue depth (nvme5n1) 00:18:17.151 Could not set queue depth (nvme6n1) 00:18:17.151 Could not set queue depth (nvme7n1) 00:18:17.151 Could not set queue depth (nvme8n1) 00:18:17.151 Could not set queue depth (nvme9n1) 00:18:17.409 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:17.410 fio-3.35 00:18:17.410 Starting 11 threads 00:18:29.619 00:18:29.619 job0: (groupid=0, jobs=1): err= 0: pid=90847: Mon Nov 4 07:24:29 2024 00:18:29.619 read: IOPS=593, BW=148MiB/s (156MB/s)(1500MiB/10106msec) 00:18:29.619 slat (usec): min=13, max=103082, avg=1632.64, stdev=5858.56 
00:18:29.619 clat (msec): min=7, max=272, avg=105.98, stdev=22.74 00:18:29.619 lat (msec): min=15, max=272, avg=107.61, stdev=23.39 00:18:29.619 clat percentiles (msec): 00:18:29.619 | 1.00th=[ 31], 5.00th=[ 79], 10.00th=[ 89], 20.00th=[ 95], 00:18:29.619 | 30.00th=[ 100], 40.00th=[ 104], 50.00th=[ 107], 60.00th=[ 110], 00:18:29.619 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 132], 00:18:29.619 | 99.00th=[ 199], 99.50th=[ 224], 99.90th=[ 230], 99.95th=[ 230], 00:18:29.619 | 99.99th=[ 271] 00:18:29.619 bw ( KiB/s): min=102092, max=232960, per=9.28%, avg=151998.35, stdev=22802.82, samples=20 00:18:29.619 iops : min= 398, max= 910, avg=593.60, stdev=89.15, samples=20 00:18:29.619 lat (msec) : 10=0.02%, 20=0.07%, 50=3.05%, 100=28.53%, 250=68.32% 00:18:29.619 lat (msec) : 500=0.02% 00:18:29.619 cpu : usr=0.22%, sys=1.82%, ctx=1250, majf=0, minf=4097 00:18:29.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:29.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.619 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.619 job1: (groupid=0, jobs=1): err= 0: pid=90848: Mon Nov 4 07:24:29 2024 00:18:29.619 read: IOPS=719, BW=180MiB/s (189MB/s)(1818MiB/10103msec) 00:18:29.619 slat (usec): min=22, max=156009, avg=1361.19, stdev=5213.91 00:18:29.619 clat (msec): min=45, max=241, avg=87.43, stdev=28.32 00:18:29.619 lat (msec): min=45, max=319, avg=88.79, stdev=28.96 00:18:29.619 clat percentiles (msec): 00:18:29.619 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 69], 00:18:29.619 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 83], 00:18:29.619 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 136], 95.00th=[ 148], 00:18:29.619 | 99.00th=[ 182], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 243], 00:18:29.619 | 99.99th=[ 243] 00:18:29.619 bw ( KiB/s): min=83289, max=233005, per=11.27%, avg=184493.40, stdev=47234.85, samples=20 00:18:29.619 iops : min= 325, max= 910, avg=720.60, stdev=184.52, samples=20 00:18:29.619 lat (msec) : 50=0.38%, 100=81.14%, 250=18.48% 00:18:29.619 cpu : usr=0.25%, sys=2.22%, ctx=1467, majf=0, minf=4097 00:18:29.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.619 issued rwts: total=7273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.619 job2: (groupid=0, jobs=1): err= 0: pid=90849: Mon Nov 4 07:24:29 2024 00:18:29.619 read: IOPS=731, BW=183MiB/s (192MB/s)(1848MiB/10106msec) 00:18:29.619 slat (usec): min=16, max=75959, avg=1334.70, stdev=4582.84 00:18:29.619 clat (msec): min=19, max=277, avg=86.05, stdev=26.33 00:18:29.619 lat (msec): min=19, max=277, avg=87.38, stdev=26.89 00:18:29.619 clat percentiles (msec): 00:18:29.619 | 1.00th=[ 53], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 70], 00:18:29.619 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 82], 00:18:29.619 | 70.00th=[ 87], 80.00th=[ 93], 90.00th=[ 132], 95.00th=[ 142], 00:18:29.619 | 99.00th=[ 167], 99.50th=[ 218], 99.90th=[ 253], 99.95th=[ 253], 00:18:29.619 | 99.99th=[ 279] 00:18:29.619 bw ( KiB/s): min=100864, max=225792, per=11.45%, avg=187464.90, stdev=41525.85, samples=20 
00:18:29.619 iops : min= 394, max= 882, avg=732.20, stdev=162.18, samples=20 00:18:29.619 lat (msec) : 20=0.03%, 50=0.57%, 100=82.84%, 250=16.38%, 500=0.18% 00:18:29.619 cpu : usr=0.24%, sys=2.31%, ctx=1426, majf=0, minf=4097 00:18:29.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.619 issued rwts: total=7391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.619 job3: (groupid=0, jobs=1): err= 0: pid=90850: Mon Nov 4 07:24:29 2024 00:18:29.619 read: IOPS=434, BW=109MiB/s (114MB/s)(1098MiB/10098msec) 00:18:29.619 slat (usec): min=19, max=86285, avg=2228.07, stdev=7975.36 00:18:29.619 clat (msec): min=57, max=264, avg=144.74, stdev=22.13 00:18:29.619 lat (msec): min=58, max=264, avg=146.97, stdev=23.33 00:18:29.619 clat percentiles (msec): 00:18:29.619 | 1.00th=[ 79], 5.00th=[ 96], 10.00th=[ 123], 20.00th=[ 132], 00:18:29.619 | 30.00th=[ 140], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 150], 00:18:29.619 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 171], 00:18:29.619 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 230], 99.95th=[ 266], 00:18:29.619 | 99.99th=[ 266] 00:18:29.619 bw ( KiB/s): min=95232, max=150016, per=6.76%, avg=110747.15, stdev=13108.54, samples=20 00:18:29.619 iops : min= 372, max= 586, avg=432.50, stdev=51.17, samples=20 00:18:29.619 lat (msec) : 100=5.97%, 250=93.96%, 500=0.07% 00:18:29.619 cpu : usr=0.19%, sys=1.43%, ctx=951, majf=0, minf=4097 00:18:29.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:29.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.619 issued rwts: total=4390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.619 job4: (groupid=0, jobs=1): err= 0: pid=90851: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=583, BW=146MiB/s (153MB/s)(1476MiB/10113msec) 00:18:29.620 slat (usec): min=13, max=93393, avg=1583.22, stdev=5797.18 00:18:29.620 clat (msec): min=12, max=264, avg=107.84, stdev=33.37 00:18:29.620 lat (msec): min=13, max=264, avg=109.42, stdev=34.07 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 99], 00:18:29.620 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 115], 00:18:29.620 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 148], 00:18:29.620 | 99.00th=[ 211], 99.50th=[ 241], 99.90th=[ 253], 99.95th=[ 259], 00:18:29.620 | 99.99th=[ 264] 00:18:29.620 bw ( KiB/s): min=105984, max=347136, per=9.13%, avg=149421.15, stdev=48377.19, samples=20 00:18:29.620 iops : min= 414, max= 1356, avg=583.55, stdev=188.98, samples=20 00:18:29.620 lat (msec) : 20=0.95%, 50=9.77%, 100=13.65%, 250=75.41%, 500=0.22% 00:18:29.620 cpu : usr=0.20%, sys=1.92%, ctx=1223, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 
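As a quick sanity check on the per-job bandwidth shares in these fio blocks (the "per=" field): assuming per= is simply the job's average bandwidth divided by the aggregate READ bandwidth printed in the run status group at the end of this run (1599MiB/s), the figure can be reproduced with a one-liner; job0's numbers are used below.

    # cross-check of fio's per= field, using job0's avg bw and the READ aggregate
    job_avg_kib=151998.35   # job0: "bw ( KiB/s): ... avg=151998.35"
    total_mib=1599          # "Run status group 0 (all jobs): READ: bw=1599MiB/s"
    awk -v j="$job_avg_kib" -v t="$total_mib" \
        'BEGIN { printf "share = %.2f%%\n", 100 * j / (t * 1024) }'
    # prints: share = 9.28%  -- matching the per=9.28% reported for job0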
00:18:29.620 job5: (groupid=0, jobs=1): err= 0: pid=90852: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=478, BW=120MiB/s (125MB/s)(1206MiB/10074msec) 00:18:29.620 slat (usec): min=14, max=98849, avg=2050.51, stdev=7159.12 00:18:29.620 clat (msec): min=11, max=239, avg=131.36, stdev=32.96 00:18:29.620 lat (msec): min=11, max=273, avg=133.41, stdev=33.93 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 29], 5.00th=[ 74], 10.00th=[ 85], 20.00th=[ 103], 00:18:29.620 | 30.00th=[ 112], 40.00th=[ 132], 50.00th=[ 142], 60.00th=[ 148], 00:18:29.620 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 171], 00:18:29.620 | 99.00th=[ 188], 99.50th=[ 211], 99.90th=[ 220], 99.95th=[ 220], 00:18:29.620 | 99.99th=[ 241] 00:18:29.620 bw ( KiB/s): min=89600, max=210432, per=7.43%, avg=121699.05, stdev=29555.00, samples=20 00:18:29.620 iops : min= 350, max= 822, avg=475.35, stdev=115.47, samples=20 00:18:29.620 lat (msec) : 20=0.19%, 50=1.78%, 100=15.84%, 250=82.19% 00:18:29.620 cpu : usr=0.27%, sys=1.88%, ctx=883, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 job6: (groupid=0, jobs=1): err= 0: pid=90853: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=415, BW=104MiB/s (109MB/s)(1050MiB/10100msec) 00:18:29.620 slat (usec): min=22, max=107984, avg=2332.50, stdev=7844.92 00:18:29.620 clat (msec): min=76, max=257, avg=151.27, stdev=18.80 00:18:29.620 lat (msec): min=76, max=280, avg=153.61, stdev=20.26 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 91], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 140], 00:18:29.620 | 30.00th=[ 144], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 155], 00:18:29.620 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 171], 95.00th=[ 182], 00:18:29.620 | 99.00th=[ 220], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 249], 00:18:29.620 | 99.99th=[ 257] 00:18:29.620 bw ( KiB/s): min=71680, max=120320, per=6.47%, avg=105901.10, stdev=10193.80, samples=20 00:18:29.620 iops : min= 280, max= 470, avg=413.60, stdev=39.78, samples=20 00:18:29.620 lat (msec) : 100=1.19%, 250=98.79%, 500=0.02% 00:18:29.620 cpu : usr=0.17%, sys=1.59%, ctx=758, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=4201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 job7: (groupid=0, jobs=1): err= 0: pid=90854: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=419, BW=105MiB/s (110MB/s)(1061MiB/10113msec) 00:18:29.620 slat (usec): min=17, max=116724, avg=2311.98, stdev=8176.58 00:18:29.620 clat (msec): min=19, max=298, avg=150.04, stdev=23.15 00:18:29.620 lat (msec): min=20, max=298, avg=152.35, stdev=24.57 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 78], 5.00th=[ 118], 10.00th=[ 128], 20.00th=[ 136], 00:18:29.620 | 30.00th=[ 142], 40.00th=[ 146], 50.00th=[ 153], 60.00th=[ 155], 00:18:29.620 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 184], 00:18:29.620 | 
99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 243], 99.95th=[ 245], 00:18:29.620 | 99.99th=[ 300] 00:18:29.620 bw ( KiB/s): min=85504, max=128512, per=6.53%, avg=106875.50, stdev=12375.66, samples=20 00:18:29.620 iops : min= 334, max= 502, avg=417.40, stdev=48.39, samples=20 00:18:29.620 lat (msec) : 20=0.02%, 50=0.47%, 100=2.10%, 250=97.36%, 500=0.05% 00:18:29.620 cpu : usr=0.14%, sys=1.35%, ctx=837, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=4242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 job8: (groupid=0, jobs=1): err= 0: pid=90855: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=567, BW=142MiB/s (149MB/s)(1428MiB/10062msec) 00:18:29.620 slat (usec): min=18, max=138643, avg=1653.41, stdev=5925.75 00:18:29.620 clat (msec): min=39, max=210, avg=110.97, stdev=17.18 00:18:29.620 lat (msec): min=39, max=349, avg=112.62, stdev=18.21 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 64], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 102], 00:18:29.620 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:18:29.620 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 134], 00:18:29.620 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 197], 99.95th=[ 209], 00:18:29.620 | 99.99th=[ 211] 00:18:29.620 bw ( KiB/s): min=117482, max=161280, per=8.83%, avg=144547.55, stdev=8813.11, samples=20 00:18:29.620 iops : min= 458, max= 630, avg=564.45, stdev=34.62, samples=20 00:18:29.620 lat (msec) : 50=0.60%, 100=17.53%, 250=81.87% 00:18:29.620 cpu : usr=0.34%, sys=1.69%, ctx=1246, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 job9: (groupid=0, jobs=1): err= 0: pid=90856: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=727, BW=182MiB/s (191MB/s)(1839MiB/10107msec) 00:18:29.620 slat (usec): min=15, max=75808, avg=1323.62, stdev=4815.69 00:18:29.620 clat (msec): min=44, max=251, avg=86.47, stdev=27.73 00:18:29.620 lat (msec): min=44, max=257, avg=87.79, stdev=28.35 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 70], 00:18:29.620 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 81], 00:18:29.620 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 136], 95.00th=[ 148], 00:18:29.620 | 99.00th=[ 178], 99.50th=[ 199], 99.90th=[ 213], 99.95th=[ 251], 00:18:29.620 | 99.99th=[ 251] 00:18:29.620 bw ( KiB/s): min=99527, max=224256, per=11.40%, avg=186609.30, stdev=45195.33, samples=20 00:18:29.620 iops : min= 388, max= 876, avg=728.80, stdev=176.65, samples=20 00:18:29.620 lat (msec) : 50=0.48%, 100=82.30%, 250=17.17%, 500=0.05% 00:18:29.620 cpu : usr=0.24%, sys=2.72%, ctx=1370, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=7355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 job10: (groupid=0, jobs=1): err= 0: pid=90857: Mon Nov 4 07:24:29 2024 00:18:29.620 read: IOPS=732, BW=183MiB/s (192MB/s)(1848MiB/10093msec) 00:18:29.620 slat (usec): min=22, max=49307, avg=1348.41, stdev=4604.61 00:18:29.620 clat (msec): min=11, max=219, avg=85.80, stdev=27.05 00:18:29.620 lat (msec): min=11, max=219, avg=87.15, stdev=27.63 00:18:29.620 clat percentiles (msec): 00:18:29.620 | 1.00th=[ 31], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 70], 00:18:29.620 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 84], 00:18:29.620 | 70.00th=[ 87], 80.00th=[ 92], 90.00th=[ 136], 95.00th=[ 148], 00:18:29.620 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 207], 99.95th=[ 207], 00:18:29.620 | 99.99th=[ 220] 00:18:29.620 bw ( KiB/s): min=108544, max=258560, per=11.46%, avg=187601.80, stdev=43347.41, samples=20 00:18:29.620 iops : min= 424, max= 1010, avg=732.75, stdev=169.28, samples=20 00:18:29.620 lat (msec) : 20=0.32%, 50=3.06%, 100=81.12%, 250=15.50% 00:18:29.620 cpu : usr=0.34%, sys=2.82%, ctx=1517, majf=0, minf=4097 00:18:29.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:29.620 issued rwts: total=7393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:29.620 00:18:29.620 Run status group 0 (all jobs): 00:18:29.620 READ: bw=1599MiB/s (1677MB/s), 104MiB/s-183MiB/s (109MB/s-192MB/s), io=15.8GiB (17.0GB), run=10062-10113msec 00:18:29.620 00:18:29.620 Disk stats (read/write): 00:18:29.620 nvme0n1: ios=11925/0, merge=0/0, ticks=1237971/0, in_queue=1237971, util=97.16% 00:18:29.620 nvme10n1: ios=14418/0, merge=0/0, ticks=1234497/0, in_queue=1234497, util=97.29% 00:18:29.620 nvme1n1: ios=14724/0, merge=0/0, ticks=1238729/0, in_queue=1238729, util=97.62% 00:18:29.620 nvme2n1: ios=8657/0, merge=0/0, ticks=1235289/0, in_queue=1235289, util=97.97% 00:18:29.620 nvme3n1: ios=11708/0, merge=0/0, ticks=1237179/0, in_queue=1237179, util=97.98% 00:18:29.620 nvme4n1: ios=9516/0, merge=0/0, ticks=1241492/0, in_queue=1241492, util=98.32% 00:18:29.620 nvme5n1: ios=8287/0, merge=0/0, ticks=1243295/0, in_queue=1243295, util=98.50% 00:18:29.620 nvme6n1: ios=8357/0, merge=0/0, ticks=1241456/0, in_queue=1241456, util=98.52% 00:18:29.620 nvme7n1: ios=11314/0, merge=0/0, ticks=1241185/0, in_queue=1241185, util=98.52% 00:18:29.620 nvme8n1: ios=14583/0, merge=0/0, ticks=1236647/0, in_queue=1236647, util=98.88% 00:18:29.620 nvme9n1: ios=14658/0, merge=0/0, ticks=1229602/0, in_queue=1229602, util=98.99% 00:18:29.621 07:24:29 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:29.621 [global] 00:18:29.621 thread=1 00:18:29.621 invalidate=1 00:18:29.621 rw=randwrite 00:18:29.621 time_based=1 00:18:29.621 runtime=10 00:18:29.621 ioengine=libaio 00:18:29.621 direct=1 00:18:29.621 bs=262144 00:18:29.621 iodepth=64 00:18:29.621 norandommap=1 00:18:29.621 numjobs=1 00:18:29.621 00:18:29.621 [job0] 00:18:29.621 filename=/dev/nvme0n1 00:18:29.621 [job1] 00:18:29.621 filename=/dev/nvme10n1 00:18:29.621 [job2] 00:18:29.621 filename=/dev/nvme1n1 00:18:29.621 [job3] 00:18:29.621 filename=/dev/nvme2n1 
00:18:29.621 [job4] 00:18:29.621 filename=/dev/nvme3n1 00:18:29.621 [job5] 00:18:29.621 filename=/dev/nvme4n1 00:18:29.621 [job6] 00:18:29.621 filename=/dev/nvme5n1 00:18:29.621 [job7] 00:18:29.621 filename=/dev/nvme6n1 00:18:29.621 [job8] 00:18:29.621 filename=/dev/nvme7n1 00:18:29.621 [job9] 00:18:29.621 filename=/dev/nvme8n1 00:18:29.621 [job10] 00:18:29.621 filename=/dev/nvme9n1 00:18:29.621 Could not set queue depth (nvme0n1) 00:18:29.621 Could not set queue depth (nvme10n1) 00:18:29.621 Could not set queue depth (nvme1n1) 00:18:29.621 Could not set queue depth (nvme2n1) 00:18:29.621 Could not set queue depth (nvme3n1) 00:18:29.621 Could not set queue depth (nvme4n1) 00:18:29.621 Could not set queue depth (nvme5n1) 00:18:29.621 Could not set queue depth (nvme6n1) 00:18:29.621 Could not set queue depth (nvme7n1) 00:18:29.621 Could not set queue depth (nvme8n1) 00:18:29.621 Could not set queue depth (nvme9n1) 00:18:29.621 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:29.621 fio-3.35 00:18:29.621 Starting 11 threads 00:18:39.602 00:18:39.602 job0: (groupid=0, jobs=1): err= 0: pid=91058: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=373, BW=93.4MiB/s (98.0MB/s)(949MiB/10155msec); 0 zone resets 00:18:39.602 slat (usec): min=18, max=32307, avg=2630.21, stdev=4545.54 00:18:39.602 clat (msec): min=3, max=336, avg=168.54, stdev=23.65 00:18:39.602 lat (msec): min=3, max=336, avg=171.17, stdev=23.55 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 62], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 163], 00:18:39.602 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:18:39.602 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 180], 95.00th=[ 182], 00:18:39.602 | 99.00th=[ 234], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 338], 00:18:39.602 | 99.99th=[ 338] 00:18:39.602 bw ( KiB/s): min=90112, max=116502, per=6.98%, avg=95575.25, stdev=8211.99, samples=20 00:18:39.602 iops : min= 352, max= 455, avg=373.00, stdev=32.08, samples=20 00:18:39.602 lat (msec) : 4=0.03%, 10=0.03%, 20=0.34%, 50=0.32%, 100=0.84% 00:18:39.602 lat (msec) : 250=97.65%, 500=0.79% 00:18:39.602 cpu : usr=1.00%, sys=1.06%, ctx=4016, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,3795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job1: (groupid=0, jobs=1): err= 0: pid=91059: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=434, BW=109MiB/s (114MB/s)(1102MiB/10134msec); 0 zone resets 00:18:39.602 slat (usec): min=19, max=33270, avg=2265.67, stdev=3895.92 00:18:39.602 clat (msec): min=17, max=278, avg=144.87, stdev=13.08 00:18:39.602 lat (msec): min=17, max=279, avg=147.14, stdev=12.66 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 128], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:18:39.602 | 30.00th=[ 144], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 146], 00:18:39.602 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 150], 95.00th=[ 153], 00:18:39.602 | 99.00th=[ 176], 99.50th=[ 224], 99.90th=[ 271], 99.95th=[ 271], 00:18:39.602 | 99.99th=[ 279] 00:18:39.602 bw ( KiB/s): min=102400, max=116736, per=8.12%, avg=111169.75, stdev=2881.31, samples=20 00:18:39.602 iops : min= 400, max= 456, avg=434.25, stdev=11.26, samples=20 00:18:39.602 lat (msec) : 20=0.09%, 50=0.36%, 100=0.27%, 250=98.96%, 500=0.32% 00:18:39.602 cpu : usr=0.70%, sys=1.21%, ctx=5930, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job2: (groupid=0, jobs=1): err= 0: pid=91071: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=372, BW=93.0MiB/s (97.5MB/s)(945MiB/10154msec); 0 zone resets 00:18:39.602 slat (usec): min=19, max=47498, avg=2644.41, stdev=4590.32 00:18:39.602 clat (msec): min=7, max=327, avg=169.23, stdev=20.30 00:18:39.602 lat (msec): min=7, max=327, avg=171.87, stdev=20.08 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 117], 5.00th=[ 136], 10.00th=[ 142], 20.00th=[ 163], 00:18:39.602 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:18:39.602 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 180], 95.00th=[ 182], 00:18:39.602 | 99.00th=[ 226], 99.50th=[ 271], 99.90th=[ 317], 99.95th=[ 330], 00:18:39.602 | 99.99th=[ 330] 00:18:39.602 bw ( KiB/s): min=90112, max=114688, per=6.94%, avg=95066.95, stdev=7216.08, samples=20 00:18:39.602 iops : min= 352, max= 448, avg=371.35, stdev=28.17, samples=20 00:18:39.602 lat (msec) : 10=0.05%, 20=0.08%, 50=0.11%, 100=0.45%, 250=98.62% 00:18:39.602 lat (msec) : 500=0.69% 00:18:39.602 cpu : usr=1.06%, sys=0.98%, ctx=3730, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,3778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job3: (groupid=0, jobs=1): err= 0: pid=91072: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=635, BW=159MiB/s (167MB/s)(1602MiB/10078msec); 0 zone resets 00:18:39.602 slat 
(usec): min=27, max=26239, avg=1528.62, stdev=2645.61 00:18:39.602 clat (msec): min=28, max=184, avg=99.10, stdev=16.28 00:18:39.602 lat (msec): min=28, max=186, avg=100.63, stdev=16.28 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 83], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 91], 00:18:39.602 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 96], 00:18:39.602 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 134], 95.00th=[ 144], 00:18:39.602 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 182], 00:18:39.602 | 99.99th=[ 186] 00:18:39.602 bw ( KiB/s): min=114688, max=177152, per=11.86%, avg=162406.40, stdev=20934.81, samples=20 00:18:39.602 iops : min= 448, max= 692, avg=634.40, stdev=81.78, samples=20 00:18:39.602 lat (msec) : 50=0.16%, 100=87.31%, 250=12.53% 00:18:39.602 cpu : usr=1.74%, sys=1.76%, ctx=8554, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,6407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job4: (groupid=0, jobs=1): err= 0: pid=91073: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=631, BW=158MiB/s (166MB/s)(1593MiB/10089msec); 0 zone resets 00:18:39.602 slat (usec): min=19, max=31737, avg=1563.23, stdev=2703.75 00:18:39.602 clat (msec): min=7, max=184, avg=99.69, stdev=17.63 00:18:39.602 lat (msec): min=8, max=184, avg=101.25, stdev=17.69 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 87], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 91], 00:18:39.602 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 96], 00:18:39.602 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 136], 95.00th=[ 144], 00:18:39.602 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 180], 00:18:39.602 | 99.99th=[ 184] 00:18:39.602 bw ( KiB/s): min=104144, max=176480, per=11.80%, avg=161633.20, stdev=22775.67, samples=20 00:18:39.602 iops : min= 406, max= 689, avg=631.20, stdev=89.05, samples=20 00:18:39.602 lat (msec) : 10=0.05%, 50=0.25%, 100=87.01%, 250=12.69% 00:18:39.602 cpu : usr=1.76%, sys=1.83%, ctx=5662, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,6373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job5: (groupid=0, jobs=1): err= 0: pid=91074: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=402, BW=101MiB/s (106MB/s)(1022MiB/10151msec); 0 zone resets 00:18:39.602 slat (usec): min=20, max=38176, avg=2393.22, stdev=4325.02 00:18:39.602 clat (msec): min=13, max=330, avg=156.42, stdev=36.94 00:18:39.602 lat (msec): min=13, max=330, avg=158.82, stdev=37.33 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 47], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 105], 00:18:39.602 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 178], 00:18:39.602 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 180], 95.00th=[ 182], 00:18:39.602 | 99.00th=[ 218], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 321], 00:18:39.602 | 99.99th=[ 330] 00:18:39.602 bw ( KiB/s): min=88576, max=166067, per=7.52%, 
avg=103064.50, stdev=24998.71, samples=20 00:18:39.602 iops : min= 346, max= 648, avg=402.55, stdev=97.53, samples=20 00:18:39.602 lat (msec) : 20=0.17%, 50=1.00%, 100=8.66%, 250=89.44%, 500=0.73% 00:18:39.602 cpu : usr=1.05%, sys=1.12%, ctx=5261, majf=0, minf=1 00:18:39.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:39.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.602 issued rwts: total=0,4089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.602 job6: (groupid=0, jobs=1): err= 0: pid=91075: Mon Nov 4 07:24:40 2024 00:18:39.602 write: IOPS=435, BW=109MiB/s (114MB/s)(1104MiB/10143msec); 0 zone resets 00:18:39.602 slat (usec): min=19, max=30400, avg=2259.88, stdev=3883.85 00:18:39.602 clat (msec): min=3, max=285, avg=144.63, stdev=15.23 00:18:39.602 lat (msec): min=3, max=285, avg=146.89, stdev=14.94 00:18:39.602 clat percentiles (msec): 00:18:39.602 | 1.00th=[ 85], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:18:39.602 | 30.00th=[ 144], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 146], 00:18:39.602 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 153], 00:18:39.603 | 99.00th=[ 184], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:18:39.603 | 99.99th=[ 288] 00:18:39.603 bw ( KiB/s): min=108327, max=114688, per=8.14%, avg=111504.50, stdev=1833.73, samples=20 00:18:39.603 iops : min= 423, max= 448, avg=435.20, stdev= 7.08, samples=20 00:18:39.603 lat (msec) : 4=0.09%, 20=0.20%, 50=0.27%, 100=0.63%, 250=98.48% 00:18:39.603 lat (msec) : 500=0.32% 00:18:39.603 cpu : usr=0.85%, sys=1.25%, ctx=6381, majf=0, minf=1 00:18:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.603 issued rwts: total=0,4417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.603 job7: (groupid=0, jobs=1): err= 0: pid=91076: Mon Nov 4 07:24:40 2024 00:18:39.603 write: IOPS=647, BW=162MiB/s (170MB/s)(1632MiB/10080msec); 0 zone resets 00:18:39.603 slat (usec): min=17, max=53556, avg=1526.50, stdev=2658.06 00:18:39.603 clat (msec): min=56, max=176, avg=97.27, stdev= 7.13 00:18:39.603 lat (msec): min=56, max=176, avg=98.79, stdev= 6.78 00:18:39.603 clat percentiles (msec): 00:18:39.603 | 1.00th=[ 73], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 93], 00:18:39.603 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 99], 00:18:39.603 | 70.00th=[ 100], 80.00th=[ 101], 90.00th=[ 103], 95.00th=[ 105], 00:18:39.603 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 165], 99.95th=[ 171], 00:18:39.603 | 99.99th=[ 176] 00:18:39.603 bw ( KiB/s): min=147456, max=179200, per=12.08%, avg=165488.85, stdev=6833.29, samples=20 00:18:39.603 iops : min= 576, max= 700, avg=646.40, stdev=26.78, samples=20 00:18:39.603 lat (msec) : 100=83.52%, 250=16.48% 00:18:39.603 cpu : usr=1.23%, sys=1.86%, ctx=7834, majf=0, minf=1 00:18:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.603 issued rwts: total=0,6528,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:39.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.603 job8: (groupid=0, jobs=1): err= 0: pid=91077: Mon Nov 4 07:24:40 2024 00:18:39.603 write: IOPS=624, BW=156MiB/s (164MB/s)(1577MiB/10090msec); 0 zone resets 00:18:39.603 slat (usec): min=19, max=14267, avg=1561.00, stdev=2730.55 00:18:39.603 clat (msec): min=3, max=186, avg=100.81, stdev=17.84 00:18:39.603 lat (msec): min=3, max=186, avg=102.37, stdev=17.95 00:18:39.603 clat percentiles (msec): 00:18:39.603 | 1.00th=[ 42], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 93], 00:18:39.603 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 99], 00:18:39.603 | 70.00th=[ 100], 80.00th=[ 101], 90.00th=[ 136], 95.00th=[ 144], 00:18:39.603 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 182], 00:18:39.603 | 99.99th=[ 186] 00:18:39.603 bw ( KiB/s): min=108327, max=185715, per=11.67%, avg=159930.75, stdev=20888.08, samples=20 00:18:39.603 iops : min= 423, max= 725, avg=624.55, stdev=81.65, samples=20 00:18:39.603 lat (msec) : 4=0.05%, 10=0.13%, 20=0.21%, 50=0.87%, 100=78.91% 00:18:39.603 lat (msec) : 250=19.84% 00:18:39.603 cpu : usr=1.03%, sys=1.66%, ctx=8310, majf=0, minf=1 00:18:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.603 issued rwts: total=0,6306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.603 job9: (groupid=0, jobs=1): err= 0: pid=91078: Mon Nov 4 07:24:40 2024 00:18:39.603 write: IOPS=435, BW=109MiB/s (114MB/s)(1103MiB/10133msec); 0 zone resets 00:18:39.603 slat (usec): min=19, max=33993, avg=2262.54, stdev=3886.79 00:18:39.603 clat (msec): min=36, max=277, avg=144.70, stdev=12.52 00:18:39.603 lat (msec): min=36, max=277, avg=146.97, stdev=12.10 00:18:39.603 clat percentiles (msec): 00:18:39.603 | 1.00th=[ 123], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:18:39.603 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 146], 00:18:39.603 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 150], 95.00th=[ 153], 00:18:39.603 | 99.00th=[ 176], 99.50th=[ 224], 99.90th=[ 271], 99.95th=[ 271], 00:18:39.603 | 99.99th=[ 279] 00:18:39.603 bw ( KiB/s): min=104657, max=116736, per=8.13%, avg=111308.20, stdev=2495.65, samples=20 00:18:39.603 iops : min= 408, max= 456, avg=434.75, stdev= 9.87, samples=20 00:18:39.603 lat (msec) : 50=0.18%, 100=0.63%, 250=98.96%, 500=0.23% 00:18:39.603 cpu : usr=0.84%, sys=1.28%, ctx=6210, majf=0, minf=1 00:18:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.603 issued rwts: total=0,4411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.603 job10: (groupid=0, jobs=1): err= 0: pid=91079: Mon Nov 4 07:24:40 2024 00:18:39.603 write: IOPS=377, BW=94.4MiB/s (99.0MB/s)(958MiB/10143msec); 0 zone resets 00:18:39.603 slat (usec): min=19, max=14604, avg=2568.49, stdev=4495.51 00:18:39.603 clat (msec): min=14, max=320, avg=166.80, stdev=24.76 00:18:39.603 lat (msec): min=14, max=320, avg=169.37, stdev=24.84 00:18:39.603 clat percentiles (msec): 00:18:39.603 | 1.00th=[ 67], 5.00th=[ 132], 10.00th=[ 140], 20.00th=[ 163], 00:18:39.603 
| 30.00th=[ 167], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 178], 00:18:39.603 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 180], 95.00th=[ 182], 00:18:39.603 | 99.00th=[ 207], 99.50th=[ 264], 99.90th=[ 309], 99.95th=[ 321], 00:18:39.603 | 99.99th=[ 321] 00:18:39.603 bw ( KiB/s): min=88576, max=131817, per=7.04%, avg=96409.80, stdev=10892.37, samples=20 00:18:39.603 iops : min= 346, max= 514, avg=376.55, stdev=42.38, samples=20 00:18:39.603 lat (msec) : 20=0.10%, 50=0.52%, 100=1.80%, 250=96.89%, 500=0.68% 00:18:39.603 cpu : usr=1.05%, sys=1.03%, ctx=5552, majf=0, minf=1 00:18:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:39.603 issued rwts: total=0,3831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.603 00:18:39.603 Run status group 0 (all jobs): 00:18:39.603 WRITE: bw=1338MiB/s (1403MB/s), 93.0MiB/s-162MiB/s (97.5MB/s-170MB/s), io=13.3GiB (14.2GB), run=10078-10155msec 00:18:39.603 00:18:39.603 Disk stats (read/write): 00:18:39.603 nvme0n1: ios=49/7470, merge=0/0, ticks=57/1212677, in_queue=1212734, util=97.97% 00:18:39.603 nvme10n1: ios=49/8673, merge=0/0, ticks=46/1211833, in_queue=1211879, util=98.05% 00:18:39.603 nvme1n1: ios=30/7421, merge=0/0, ticks=38/1210130, in_queue=1210168, util=98.00% 00:18:39.603 nvme2n1: ios=5/12663, merge=0/0, ticks=10/1214869, in_queue=1214879, util=97.92% 00:18:39.603 nvme3n1: ios=25/12625, merge=0/0, ticks=65/1216226, in_queue=1216291, util=98.39% 00:18:39.603 nvme4n1: ios=0/8046, merge=0/0, ticks=0/1211489, in_queue=1211489, util=98.27% 00:18:39.603 nvme5n1: ios=0/8710, merge=0/0, ticks=0/1214698, in_queue=1214698, util=98.52% 00:18:39.603 nvme6n1: ios=0/12909, merge=0/0, ticks=0/1214827, in_queue=1214827, util=98.41% 00:18:39.603 nvme7n1: ios=0/12487, merge=0/0, ticks=0/1217696, in_queue=1217696, util=98.83% 00:18:39.603 nvme8n1: ios=0/8684, merge=0/0, ticks=0/1212227, in_queue=1212227, util=98.79% 00:18:39.603 nvme9n1: ios=0/7520, merge=0/0, ticks=0/1209403, in_queue=1209403, util=98.80% 00:18:39.603 07:24:40 -- target/multiconnection.sh@36 -- # sync 00:18:39.603 07:24:40 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:39.603 07:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.603 07:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:39.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.603 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:39.603 07:24:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:39.603 07:24:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.603 07:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.603 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.603 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.603 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.603 07:24:40 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.603 07:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:39.603 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:39.603 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:39.603 07:24:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:39.603 07:24:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.603 07:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:39.603 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.603 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.603 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.603 07:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.603 07:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:39.603 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:39.603 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:39.603 07:24:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.603 07:24:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:39.603 07:24:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.603 07:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:39.603 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.603 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.603 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:39.604 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:39.604 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:39.604 07:24:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:39.604 07:24:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:39.604 07:24:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:39.604 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:40 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:39.604 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:39.604 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:39.604 07:24:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:39.604 07:24:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:39.604 07:24:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:39.604 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:39.604 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:39.604 07:24:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:39.604 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:39.604 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:39.604 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:39.604 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:39.604 07:24:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:39.604 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:39.604 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:39.604 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:39.604 
NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:39.604 07:24:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:39.604 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:39.604 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:39.604 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:39.604 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:39.604 07:24:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:39.604 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:39.604 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.604 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.604 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:39.604 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.604 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.604 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.604 07:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.604 07:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:39.863 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:39.863 07:24:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:39.863 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.863 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:39.863 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:39.863 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:39.863 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:39.863 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:39.863 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:39.863 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.863 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.863 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.863 07:24:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.863 07:24:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:40.122 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:40.122 07:24:41 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:40.122 07:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:18:40.122 07:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:40.122 07:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:40.122 07:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:40.122 07:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:40.122 07:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:18:40.122 07:24:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:40.122 07:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.122 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.122 07:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.122 07:24:41 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:40.122 07:24:41 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:40.122 07:24:41 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:40.122 07:24:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.122 07:24:41 -- nvmf/common.sh@116 -- # sync 00:18:40.122 07:24:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.122 07:24:41 -- nvmf/common.sh@119 -- # set +e 00:18:40.122 07:24:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.122 07:24:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.122 rmmod nvme_tcp 00:18:40.122 rmmod nvme_fabrics 00:18:40.122 rmmod nvme_keyring 00:18:40.122 07:24:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.122 07:24:41 -- nvmf/common.sh@123 -- # set -e 00:18:40.122 07:24:41 -- nvmf/common.sh@124 -- # return 0 00:18:40.122 07:24:41 -- nvmf/common.sh@477 -- # '[' -n 90363 ']' 00:18:40.122 07:24:41 -- nvmf/common.sh@478 -- # killprocess 90363 00:18:40.122 07:24:41 -- common/autotest_common.sh@926 -- # '[' -z 90363 ']' 00:18:40.122 07:24:41 -- common/autotest_common.sh@930 -- # kill -0 90363 00:18:40.122 07:24:41 -- common/autotest_common.sh@931 -- # uname 00:18:40.122 07:24:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:40.122 07:24:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90363 00:18:40.122 killing process with pid 90363 00:18:40.122 07:24:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:40.122 07:24:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:40.122 07:24:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90363' 00:18:40.122 07:24:41 -- common/autotest_common.sh@945 -- # kill 90363 00:18:40.122 07:24:41 -- common/autotest_common.sh@950 -- # wait 90363 00:18:40.689 07:24:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:40.689 07:24:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.689 07:24:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.689 07:24:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.689 07:24:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.689 07:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.689 07:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.689 07:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.689 07:24:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:40.689 00:18:40.689 real 0m50.274s 00:18:40.689 user 2m48.810s 00:18:40.689 sys 0m25.283s 
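The teardown traced above (multiconnection.sh@37-40) repeats one pattern per subsystem: disconnect the initiator, wait for the SPDKn serial to disappear from lsblk, then delete the subsystem over RPC. A minimal sketch of that loop, with RPC_PY standing in for however rpc_cmd reaches the target in this environment (the real waitforserial_disconnect helper also bounds its retries):

    RPC_PY=${RPC_PY:-scripts/rpc.py}    # assumption; rpc_cmd is a test helper
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # wait until the block device with serial SPDK$i is gone
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        "$RPC_PY" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done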
00:18:40.689 07:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.689 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:40.689 ************************************ 00:18:40.689 END TEST nvmf_multiconnection 00:18:40.689 ************************************ 00:18:40.948 07:24:42 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:40.948 07:24:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:40.948 07:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.948 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:40.948 ************************************ 00:18:40.948 START TEST nvmf_initiator_timeout 00:18:40.948 ************************************ 00:18:40.948 07:24:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:40.948 * Looking for test storage... 00:18:40.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:40.948 07:24:42 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.948 07:24:42 -- nvmf/common.sh@7 -- # uname -s 00:18:40.948 07:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.948 07:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.948 07:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.948 07:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.948 07:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.948 07:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.948 07:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.948 07:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.948 07:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.948 07:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:18:40.948 07:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:18:40.948 07:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.948 07:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.948 07:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.948 07:24:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.948 07:24:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.948 07:24:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.948 07:24:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.948 07:24:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.948 07:24:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.948 07:24:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.948 07:24:42 -- paths/export.sh@5 -- # export PATH 00:18:40.948 07:24:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.948 07:24:42 -- nvmf/common.sh@46 -- # : 0 00:18:40.948 07:24:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.948 07:24:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.948 07:24:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.948 07:24:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.948 07:24:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.948 07:24:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.948 07:24:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.948 07:24:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.948 07:24:42 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.948 07:24:42 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.948 07:24:42 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:40.948 07:24:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.948 07:24:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.948 07:24:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.948 07:24:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.948 07:24:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.948 07:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.948 07:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.948 07:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.948 07:24:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:18:40.948 07:24:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.948 07:24:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.948 07:24:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.948 07:24:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.948 07:24:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.948 07:24:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.948 07:24:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.948 07:24:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.948 07:24:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.948 07:24:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.948 07:24:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.948 07:24:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.948 07:24:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.948 07:24:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.948 07:24:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.948 Cannot find device "nvmf_tgt_br" 00:18:40.948 07:24:42 -- nvmf/common.sh@154 -- # true 00:18:40.948 07:24:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.948 Cannot find device "nvmf_tgt_br2" 00:18:40.948 07:24:42 -- nvmf/common.sh@155 -- # true 00:18:40.948 07:24:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.948 07:24:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.948 Cannot find device "nvmf_tgt_br" 00:18:40.948 07:24:42 -- nvmf/common.sh@157 -- # true 00:18:40.948 07:24:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.948 Cannot find device "nvmf_tgt_br2" 00:18:40.948 07:24:42 -- nvmf/common.sh@158 -- # true 00:18:40.948 07:24:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.948 07:24:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.948 07:24:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.949 07:24:42 -- nvmf/common.sh@161 -- # true 00:18:40.949 07:24:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.949 07:24:42 -- nvmf/common.sh@162 -- # true 00:18:40.949 07:24:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.207 07:24:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.207 07:24:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.207 07:24:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.207 07:24:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.207 07:24:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.207 07:24:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.207 07:24:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.207 07:24:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
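The nvmf_veth_init calls traced above set up the virtual test network for the TCP transport: a dedicated namespace (nvmf_tgt_ns_spdk) holds the target-side interfaces on 10.0.0.2 and 10.0.0.3, while the initiator stays in the root namespace on 10.0.0.1. A condensed sketch of the equivalent iproute2 commands, using the same interface names as the trace (the bridge and iptables rules are added in the lines that follow):

# create the target namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2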
00:18:41.207 07:24:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:41.208 07:24:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:41.208 07:24:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:41.208 07:24:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:41.208 07:24:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.208 07:24:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.208 07:24:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.208 07:24:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:41.208 07:24:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:41.208 07:24:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.208 07:24:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.208 07:24:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.208 07:24:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.208 07:24:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.208 07:24:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:41.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:18:41.208 00:18:41.208 --- 10.0.0.2 ping statistics --- 00:18:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.208 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:41.208 07:24:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:41.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:41.208 00:18:41.208 --- 10.0.0.3 ping statistics --- 00:18:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.208 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:41.208 07:24:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:41.208 00:18:41.208 --- 10.0.0.1 ping statistics --- 00:18:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.208 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:41.208 07:24:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.208 07:24:42 -- nvmf/common.sh@421 -- # return 0 00:18:41.208 07:24:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:41.208 07:24:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.208 07:24:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:41.208 07:24:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:41.208 07:24:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.208 07:24:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:41.208 07:24:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:41.208 07:24:42 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:41.208 07:24:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:41.208 07:24:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:41.208 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:41.208 07:24:42 -- nvmf/common.sh@469 -- # nvmfpid=91449 00:18:41.208 07:24:42 -- nvmf/common.sh@470 -- # waitforlisten 91449 00:18:41.208 07:24:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.208 07:24:42 -- common/autotest_common.sh@819 -- # '[' -z 91449 ']' 00:18:41.208 07:24:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.208 07:24:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.208 07:24:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.208 07:24:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:41.208 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:41.208 [2024-11-04 07:24:43.032157] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:41.208 [2024-11-04 07:24:43.032220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.467 [2024-11-04 07:24:43.168565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.467 [2024-11-04 07:24:43.244785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:41.467 [2024-11-04 07:24:43.244994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.467 [2024-11-04 07:24:43.245014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.467 [2024-11-04 07:24:43.245027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
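nvmfappstart launches the target inside that namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and core mask 0xF (cores 0-3, matching the four reactor lines below), then waits for the RPC socket. A rough shell equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket the harness polls for:

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# approximation of waitforlisten: poll until the UNIX-domain RPC socket exists
while ! test -S /var/tmp/spdk.sock; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
  sleep 0.1
done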
00:18:41.467 [2024-11-04 07:24:43.245141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.467 [2024-11-04 07:24:43.245592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.467 [2024-11-04 07:24:43.245722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.467 [2024-11-04 07:24:43.245734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.403 07:24:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:42.403 07:24:44 -- common/autotest_common.sh@852 -- # return 0 00:18:42.403 07:24:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:42.403 07:24:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 07:24:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 Malloc0 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 Delay0 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 [2024-11-04 07:24:44.157061] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.403 07:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.403 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 [2024-11-04 07:24:44.185343] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.403 07:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.403 07:24:44 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.662 07:24:44 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.662 07:24:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:42.662 07:24:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.662 07:24:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:42.662 07:24:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:44.564 07:24:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:44.564 07:24:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:44.564 07:24:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:44.564 07:24:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:44.564 07:24:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.564 07:24:46 -- common/autotest_common.sh@1187 -- # return 0 00:18:44.564 07:24:46 -- target/initiator_timeout.sh@35 -- # fio_pid=91531 00:18:44.564 07:24:46 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:44.564 07:24:46 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:44.823 [global] 00:18:44.823 thread=1 00:18:44.823 invalidate=1 00:18:44.823 rw=write 00:18:44.823 time_based=1 00:18:44.823 runtime=60 00:18:44.823 ioengine=libaio 00:18:44.823 direct=1 00:18:44.823 bs=4096 00:18:44.823 iodepth=1 00:18:44.823 norandommap=0 00:18:44.823 numjobs=1 00:18:44.823 00:18:44.823 verify_dump=1 00:18:44.823 verify_backlog=512 00:18:44.823 verify_state_save=0 00:18:44.823 do_verify=1 00:18:44.823 verify=crc32c-intel 00:18:44.823 [job0] 00:18:44.823 filename=/dev/nvme0n1 00:18:44.823 Could not set queue depth (nvme0n1) 00:18:44.823 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.823 fio-3.35 00:18:44.823 Starting 1 thread 00:18:48.108 07:24:49 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:48.108 07:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.108 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:18:48.108 true 00:18:48.108 07:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.108 07:24:49 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:48.108 07:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.108 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:18:48.108 true 00:18:48.108 07:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.108 07:24:49 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:48.108 07:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.108 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:18:48.108 true 00:18:48.108 07:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.108 07:24:49 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:48.108 07:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.108 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:18:48.108 true 00:18:48.108 07:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.108 07:24:49 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:50.668 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.668 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:18:50.668 true 00:18:50.668 07:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:50.668 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.668 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:18:50.668 true 00:18:50.668 07:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:50.668 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.668 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:18:50.668 true 00:18:50.668 07:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:50.668 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.668 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:18:50.668 true 00:18:50.668 07:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:50.668 07:24:52 -- target/initiator_timeout.sh@54 -- # wait 91531 00:19:46.892 00:19:46.893 job0: (groupid=0, jobs=1): err= 0: pid=91552: Mon Nov 4 07:25:46 2024 00:19:46.893 read: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec) 00:19:46.893 slat (nsec): min=11240, max=73006, avg=13387.06, stdev=3589.19 00:19:46.893 clat (usec): min=155, max=1316, avg=201.14, stdev=21.47 00:19:46.893 lat (usec): min=167, max=1330, avg=214.53, stdev=22.31 00:19:46.893 clat percentiles (usec): 00:19:46.893 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:19:46.893 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:19:46.893 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 239], 00:19:46.893 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 334], 00:19:46.893 | 99.99th=[ 570] 00:19:46.893 write: IOPS=816, BW=3266KiB/s (3344kB/s)(191MiB/60000msec); 0 zone resets 00:19:46.893 slat (usec): min=17, max=7247, avg=20.28, stdev=41.23 00:19:46.893 clat (usec): min=121, max=40524k, avg=988.49, stdev=183093.86 00:19:46.893 lat (usec): min=139, max=40524k, avg=1008.77, stdev=183093.85 00:19:46.893 clat percentiles (usec): 00:19:46.893 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:19:46.893 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:19:46.893 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 194], 00:19:46.893 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 289], 99.95th=[ 396], 00:19:46.893 | 99.99th=[ 2245] 00:19:46.893 bw ( KiB/s): min= 4320, max=11704, per=100.00%, avg=9802.77, stdev=1438.83, samples=39 00:19:46.893 iops : min= 1080, max= 2926, avg=2450.69, stdev=359.71, samples=39 00:19:46.893 lat (usec) : 250=98.67%, 500=1.31%, 750=0.01%, 1000=0.01% 00:19:46.893 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.01%, >=2000=0.01% 00:19:46.893 cpu : usr=0.50%, sys=2.01%, ctx=97660, majf=0, minf=5 00:19:46.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.893 issued rwts: total=48640,48986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.893 00:19:46.893 Run status group 0 (all jobs): 00:19:46.893 READ: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:19:46.893 WRITE: bw=3266KiB/s (3344kB/s), 3266KiB/s-3266KiB/s (3344kB/s-3344kB/s), io=191MiB (201MB), run=60000-60000msec 00:19:46.893 00:19:46.893 Disk stats (read/write): 00:19:46.893 nvme0n1: ios=48727/48640, merge=0/0, ticks=10257/8435, in_queue=18692, util=99.71% 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:46.893 07:25:46 -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.893 07:25:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:46.893 07:25:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.893 07:25:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:46.893 07:25:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.893 nvmf hotplug test: fio successful as expected 00:19:46.893 07:25:46 -- common/autotest_common.sh@1210 -- # return 0 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.893 07:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.893 07:25:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.893 07:25:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:46.893 07:25:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:46.893 07:25:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.893 07:25:46 -- nvmf/common.sh@116 -- # sync 00:19:46.893 07:25:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:46.893 07:25:46 -- nvmf/common.sh@119 -- # set +e 00:19:46.893 07:25:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.893 07:25:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:46.893 rmmod nvme_tcp 00:19:46.893 rmmod nvme_fabrics 00:19:46.893 rmmod nvme_keyring 00:19:46.893 07:25:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.893 07:25:46 -- nvmf/common.sh@123 -- # set -e 00:19:46.893 07:25:46 -- nvmf/common.sh@124 -- # return 0 00:19:46.893 07:25:46 -- nvmf/common.sh@477 -- # '[' -n 91449 ']' 00:19:46.893 07:25:46 -- nvmf/common.sh@478 -- # killprocess 91449 00:19:46.893 07:25:46 -- common/autotest_common.sh@926 -- # '[' -z 91449 ']' 00:19:46.893 07:25:46 -- common/autotest_common.sh@930 -- # kill -0 91449 00:19:46.893 07:25:46 -- common/autotest_common.sh@931 -- # uname 00:19:46.893 07:25:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:46.893 07:25:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91449 00:19:46.893 killing 
process with pid 91449 00:19:46.893 07:25:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:46.893 07:25:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:46.893 07:25:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91449' 00:19:46.893 07:25:46 -- common/autotest_common.sh@945 -- # kill 91449 00:19:46.893 07:25:46 -- common/autotest_common.sh@950 -- # wait 91449 00:19:46.893 07:25:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:46.893 07:25:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.893 07:25:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.893 07:25:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.893 07:25:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:46.893 07:25:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.893 07:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.893 07:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.893 07:25:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:46.893 00:19:46.893 real 1m4.684s 00:19:46.893 user 4m7.482s 00:19:46.893 sys 0m8.026s 00:19:46.893 07:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.893 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.893 ************************************ 00:19:46.893 END TEST nvmf_initiator_timeout 00:19:46.893 ************************************ 00:19:46.893 07:25:47 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:46.893 07:25:47 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:46.893 07:25:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:46.893 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.893 07:25:47 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:46.893 07:25:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:46.893 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.893 07:25:47 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:46.893 07:25:47 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:46.893 07:25:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:46.893 07:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:46.893 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.893 ************************************ 00:19:46.893 START TEST nvmf_multicontroller 00:19:46.893 ************************************ 00:19:46.893 07:25:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:46.893 * Looking for test storage... 
00:19:46.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:46.893 07:25:47 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.893 07:25:47 -- nvmf/common.sh@7 -- # uname -s 00:19:46.893 07:25:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.893 07:25:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.893 07:25:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.893 07:25:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.893 07:25:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.893 07:25:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.893 07:25:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.893 07:25:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.893 07:25:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.893 07:25:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.893 07:25:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:46.893 07:25:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:46.893 07:25:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.893 07:25:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.893 07:25:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.893 07:25:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.893 07:25:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.893 07:25:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.893 07:25:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.893 07:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.893 07:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.894 07:25:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.894 07:25:47 -- 
paths/export.sh@5 -- # export PATH 00:19:46.894 07:25:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.894 07:25:47 -- nvmf/common.sh@46 -- # : 0 00:19:46.894 07:25:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:46.894 07:25:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:46.894 07:25:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.894 07:25:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.894 07:25:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:46.894 07:25:47 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:46.894 07:25:47 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:46.894 07:25:47 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:46.894 07:25:47 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:46.894 07:25:47 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.894 07:25:47 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:46.894 07:25:47 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:46.894 07:25:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.894 07:25:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:46.894 07:25:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:46.894 07:25:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:46.894 07:25:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.894 07:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.894 07:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.894 07:25:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:46.894 07:25:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.894 07:25:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.894 07:25:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.894 07:25:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:46.894 07:25:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.894 07:25:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.894 07:25:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.894 07:25:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.894 07:25:47 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.894 07:25:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.894 07:25:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.894 07:25:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.894 07:25:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:46.894 07:25:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:46.894 Cannot find device "nvmf_tgt_br" 00:19:46.894 07:25:47 -- nvmf/common.sh@154 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.894 Cannot find device "nvmf_tgt_br2" 00:19:46.894 07:25:47 -- nvmf/common.sh@155 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:46.894 07:25:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:46.894 Cannot find device "nvmf_tgt_br" 00:19:46.894 07:25:47 -- nvmf/common.sh@157 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:46.894 Cannot find device "nvmf_tgt_br2" 00:19:46.894 07:25:47 -- nvmf/common.sh@158 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:46.894 07:25:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:46.894 07:25:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.894 07:25:47 -- nvmf/common.sh@161 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.894 07:25:47 -- nvmf/common.sh@162 -- # true 00:19:46.894 07:25:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.894 07:25:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.894 07:25:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.894 07:25:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.894 07:25:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.894 07:25:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.894 07:25:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:46.894 07:25:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.894 07:25:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.894 07:25:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:46.894 07:25:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:46.894 07:25:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:46.894 07:25:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:46.894 07:25:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.894 07:25:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.894 07:25:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.894 07:25:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:46.894 07:25:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:46.894 07:25:47 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.894 07:25:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.894 07:25:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.894 07:25:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.894 07:25:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.894 07:25:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:46.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:46.894 00:19:46.894 --- 10.0.0.2 ping statistics --- 00:19:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.894 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:46.894 07:25:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:46.894 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:46.894 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:19:46.894 00:19:46.894 --- 10.0.0.3 ping statistics --- 00:19:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.894 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:46.894 07:25:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:46.894 00:19:46.894 --- 10.0.0.1 ping statistics --- 00:19:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.894 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:46.894 07:25:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.894 07:25:47 -- nvmf/common.sh@421 -- # return 0 00:19:46.894 07:25:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.894 07:25:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:46.894 07:25:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.894 07:25:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:46.894 07:25:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:46.894 07:25:47 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:46.894 07:25:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:46.894 07:25:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:46.894 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.894 07:25:47 -- nvmf/common.sh@469 -- # nvmfpid=92395 00:19:46.894 07:25:47 -- nvmf/common.sh@470 -- # waitforlisten 92395 00:19:46.894 07:25:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:46.894 07:25:47 -- common/autotest_common.sh@819 -- # '[' -z 92395 ']' 00:19:46.894 07:25:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.894 07:25:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.894 07:25:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
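For the multicontroller host test the same network bring-up is repeated, but the target is started with core mask 0xE rather than 0xF: 0xE is binary 1110, so the target is limited to cores 1-3, which is why the startup log that follows reports three available cores and three reactors. In condensed form:

# 0xE = 0b1110 -> reactors on cores 1, 2 and 3; core 0 is not used by this target
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE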
00:19:46.894 07:25:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:46.894 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.894 [2024-11-04 07:25:47.851824] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:46.894 [2024-11-04 07:25:47.851931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.894 [2024-11-04 07:25:47.989456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.894 [2024-11-04 07:25:48.062271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:46.894 [2024-11-04 07:25:48.062683] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.894 [2024-11-04 07:25:48.063092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.894 [2024-11-04 07:25:48.063206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.894 [2024-11-04 07:25:48.063438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.894 [2024-11-04 07:25:48.063645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.894 [2024-11-04 07:25:48.063792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.894 07:25:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:46.894 07:25:48 -- common/autotest_common.sh@852 -- # return 0 00:19:46.895 07:25:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:46.895 07:25:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:46.895 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.154 07:25:48 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 [2024-11-04 07:25:48.768465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 Malloc0 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 [2024-11-04 07:25:48.840462] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 [2024-11-04 07:25:48.848340] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 Malloc1 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:47.154 07:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 07:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.154 07:25:48 -- host/multicontroller.sh@44 -- # bdevperf_pid=92447 00:19:47.154 07:25:48 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:47.154 07:25:48 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.154 07:25:48 -- host/multicontroller.sh@47 -- # waitforlisten 92447 /var/tmp/bdevperf.sock 00:19:47.154 07:25:48 -- common/autotest_common.sh@819 -- # '[' -z 92447 ']' 00:19:47.154 07:25:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.154 07:25:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:47.154 07:25:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.154 07:25:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:47.154 07:25:48 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 07:25:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.531 07:25:49 -- common/autotest_common.sh@852 -- # return 0 00:19:48.531 07:25:49 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:48.531 07:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 NVMe0n1 00:19:48.531 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.531 07:25:50 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.531 07:25:50 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:48.531 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.531 1 00:19:48.531 07:25:50 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.531 07:25:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:48.531 07:25:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.531 07:25:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:48.531 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 2024/11/04 07:25:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:48.531 request: 00:19:48.531 { 00:19:48.531 "method": "bdev_nvme_attach_controller", 00:19:48.531 "params": { 00:19:48.531 "name": "NVMe0", 00:19:48.531 "trtype": "tcp", 00:19:48.531 "traddr": "10.0.0.2", 00:19:48.531 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:48.531 "hostaddr": "10.0.0.2", 00:19:48.531 "hostsvcid": "60000", 00:19:48.531 "adrfam": "ipv4", 00:19:48.531 "trsvcid": "4420", 00:19:48.531 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:48.531 } 00:19:48.531 } 00:19:48.531 Got JSON-RPC error response 
00:19:48.531 GoRPCClient: error on JSON-RPC call 00:19:48.531 07:25:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # es=1 00:19:48.531 07:25:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:48.531 07:25:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:48.531 07:25:50 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.531 07:25:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:48.531 07:25:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.531 07:25:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:48.531 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 2024/11/04 07:25:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:48.531 request: 00:19:48.531 { 00:19:48.531 "method": "bdev_nvme_attach_controller", 00:19:48.531 "params": { 00:19:48.531 "name": "NVMe0", 00:19:48.531 "trtype": "tcp", 00:19:48.531 "traddr": "10.0.0.2", 00:19:48.531 "hostaddr": "10.0.0.2", 00:19:48.531 "hostsvcid": "60000", 00:19:48.531 "adrfam": "ipv4", 00:19:48.531 "trsvcid": "4420", 00:19:48.531 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:48.531 } 00:19:48.531 } 00:19:48.531 Got JSON-RPC error response 00:19:48.531 GoRPCClient: error on JSON-RPC call 00:19:48.531 07:25:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # es=1 00:19:48.531 07:25:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:48.531 07:25:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:48.531 07:25:50 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:48.531 07:25:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:48.531 07:25:50 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 2024/11/04 07:25:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:48.531 request: 00:19:48.531 { 00:19:48.531 "method": "bdev_nvme_attach_controller", 00:19:48.531 "params": { 00:19:48.531 "name": "NVMe0", 00:19:48.531 "trtype": "tcp", 00:19:48.531 "traddr": "10.0.0.2", 00:19:48.531 "hostaddr": "10.0.0.2", 00:19:48.531 "hostsvcid": "60000", 00:19:48.531 "adrfam": "ipv4", 00:19:48.531 "trsvcid": "4420", 00:19:48.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.531 "multipath": "disable" 00:19:48.531 } 00:19:48.531 } 00:19:48.531 Got JSON-RPC error response 00:19:48.531 GoRPCClient: error on JSON-RPC call 00:19:48.531 07:25:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # es=1 00:19:48.531 07:25:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:48.531 07:25:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:48.531 07:25:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:48.531 07:25:50 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.531 07:25:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:48.531 07:25:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.531 07:25:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:48.531 07:25:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:48.531 07:25:50 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:48.531 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.531 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.531 2024/11/04 07:25:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:48.531 request: 00:19:48.531 { 00:19:48.531 "method": "bdev_nvme_attach_controller", 00:19:48.531 "params": { 00:19:48.531 "name": "NVMe0", 00:19:48.531 "trtype": "tcp", 00:19:48.531 "traddr": "10.0.0.2", 00:19:48.531 "hostaddr": "10.0.0.2", 00:19:48.531 "hostsvcid": "60000", 00:19:48.531 "adrfam": "ipv4", 00:19:48.532 "trsvcid": "4420", 00:19:48.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.532 "multipath": "failover" 00:19:48.532 } 00:19:48.532 } 00:19:48.532 Got JSON-RPC error response 00:19:48.532 GoRPCClient: error on JSON-RPC call 00:19:48.532 07:25:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:48.532 07:25:50 -- common/autotest_common.sh@643 -- # es=1 00:19:48.532 07:25:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:48.532 07:25:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:48.532 07:25:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:48.532 07:25:50 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.532 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.532 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 00:19:48.532 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.532 07:25:50 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.532 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.532 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.532 07:25:50 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:48.532 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.532 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 00:19:48.532 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.532 07:25:50 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.532 07:25:50 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:48.532 07:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.532 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 07:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.532 07:25:50 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:48.532 07:25:50 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.907 0 00:19:49.907 07:25:51 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:49.907 07:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.907 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 07:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.907 07:25:51 -- host/multicontroller.sh@100 -- # killprocess 92447 00:19:49.907 07:25:51 -- common/autotest_common.sh@926 -- # '[' -z 92447 ']' 00:19:49.907 07:25:51 -- common/autotest_common.sh@930 -- # kill -0 92447 00:19:49.907 07:25:51 -- common/autotest_common.sh@931 -- # uname 00:19:49.907 07:25:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:19:49.907 07:25:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92447 00:19:49.907 killing process with pid 92447 00:19:49.907 07:25:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.907 07:25:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.907 07:25:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92447' 00:19:49.907 07:25:51 -- common/autotest_common.sh@945 -- # kill 92447 00:19:49.907 07:25:51 -- common/autotest_common.sh@950 -- # wait 92447 00:19:49.907 07:25:51 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.907 07:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.907 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 07:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.907 07:25:51 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:49.907 07:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.907 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 07:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.907 07:25:51 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:49.907 07:25:51 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:49.907 07:25:51 -- common/autotest_common.sh@1597 -- # read -r file 00:19:49.907 07:25:51 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:49.907 07:25:51 -- common/autotest_common.sh@1596 -- # sort -u 00:19:49.907 07:25:51 -- common/autotest_common.sh@1598 -- # cat 00:19:49.907 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:49.907 [2024-11-04 07:25:48.975858] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:49.907 [2024-11-04 07:25:48.976537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92447 ] 00:19:49.907 [2024-11-04 07:25:49.117419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.907 [2024-11-04 07:25:49.192127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.907 [2024-11-04 07:25:50.252554] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a9f7850e-f649-46e3-98f5-7a37bd9b4f2a already exists 00:19:49.907 [2024-11-04 07:25:50.252604] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a9f7850e-f649-46e3-98f5-7a37bd9b4f2a alias for bdev NVMe1n1 00:19:49.907 [2024-11-04 07:25:50.252640] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:49.907 Running I/O for 1 seconds... 
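The multicontroller assertions above reduce to three bdev_nvme_attach_controller cases against the bdevperf RPC socket: re-attaching NVMe0 on the already-used 10.0.0.2:4420 path fails with both -x disable and -x failover, attaching the second listener (port 4421) to NVMe0 succeeds, and after detaching that path a separately named NVMe1 controller can be created on it, giving the expected controller count of 2 before perform_tests runs. A rough equivalent with SPDK's scripts/rpc.py (rpc_cmd in the trace wraps it; the socket path and flags mirror the trace, everything else here is illustrative):

  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Same path as the existing NVMe0 controller: both calls are expected to fail.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" \
       -i 10.0.0.2 -c 60000 -x disable  || echo "expected: multipath is disabled"
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" \
       -i 10.0.0.2 -c 60000 -x failover || echo "expected: same network path"
  # A genuinely new path (second listener, port 4421) attaches cleanly, then is detached again.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
  # A separately named controller on that path is fine; the test then expects two controllers.
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -i 10.0.0.2 -c 60000
  $RPC bdev_nvme_get_controllers | grep -c NVMe    # 2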
00:19:49.907 00:19:49.907 Latency(us) 00:19:49.907 [2024-11-04T07:25:51.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.907 [2024-11-04T07:25:51.748Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:49.907 NVMe0n1 : 1.00 23561.42 92.04 0.00 0.00 5419.45 3068.28 10962.39 00:19:49.907 [2024-11-04T07:25:51.748Z] =================================================================================================================== 00:19:49.907 [2024-11-04T07:25:51.748Z] Total : 23561.42 92.04 0.00 0.00 5419.45 3068.28 10962.39 00:19:49.907 Received shutdown signal, test time was about 1.000000 seconds 00:19:49.907 00:19:49.907 Latency(us) 00:19:49.907 [2024-11-04T07:25:51.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.907 [2024-11-04T07:25:51.748Z] =================================================================================================================== 00:19:49.907 [2024-11-04T07:25:51.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.907 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:49.908 07:25:51 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:49.908 07:25:51 -- common/autotest_common.sh@1597 -- # read -r file 00:19:49.908 07:25:51 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:49.908 07:25:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.908 07:25:51 -- nvmf/common.sh@116 -- # sync 00:19:50.166 07:25:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:50.166 07:25:51 -- nvmf/common.sh@119 -- # set +e 00:19:50.166 07:25:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:50.166 07:25:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:50.166 rmmod nvme_tcp 00:19:50.166 rmmod nvme_fabrics 00:19:50.166 rmmod nvme_keyring 00:19:50.166 07:25:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:50.166 07:25:51 -- nvmf/common.sh@123 -- # set -e 00:19:50.166 07:25:51 -- nvmf/common.sh@124 -- # return 0 00:19:50.166 07:25:51 -- nvmf/common.sh@477 -- # '[' -n 92395 ']' 00:19:50.166 07:25:51 -- nvmf/common.sh@478 -- # killprocess 92395 00:19:50.166 07:25:51 -- common/autotest_common.sh@926 -- # '[' -z 92395 ']' 00:19:50.166 07:25:51 -- common/autotest_common.sh@930 -- # kill -0 92395 00:19:50.166 07:25:51 -- common/autotest_common.sh@931 -- # uname 00:19:50.166 07:25:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:50.166 07:25:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92395 00:19:50.166 killing process with pid 92395 00:19:50.166 07:25:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:50.166 07:25:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:50.166 07:25:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92395' 00:19:50.166 07:25:51 -- common/autotest_common.sh@945 -- # kill 92395 00:19:50.166 07:25:51 -- common/autotest_common.sh@950 -- # wait 92395 00:19:50.425 07:25:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:50.425 07:25:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:50.425 07:25:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:50.425 07:25:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.425 07:25:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:50.425 07:25:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.425 07:25:52 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:50.425 07:25:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.425 07:25:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:50.425 00:19:50.425 real 0m4.867s 00:19:50.425 user 0m15.135s 00:19:50.425 sys 0m1.149s 00:19:50.425 07:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.425 ************************************ 00:19:50.425 END TEST nvmf_multicontroller 00:19:50.425 ************************************ 00:19:50.425 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:19:50.425 07:25:52 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:50.425 07:25:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:50.425 07:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.425 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:19:50.425 ************************************ 00:19:50.425 START TEST nvmf_aer 00:19:50.425 ************************************ 00:19:50.425 07:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:50.684 * Looking for test storage... 00:19:50.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.684 07:25:52 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.684 07:25:52 -- nvmf/common.sh@7 -- # uname -s 00:19:50.684 07:25:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.684 07:25:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.684 07:25:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.684 07:25:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.684 07:25:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.684 07:25:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.684 07:25:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.684 07:25:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.684 07:25:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.684 07:25:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:50.684 07:25:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:50.684 07:25:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.684 07:25:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.684 07:25:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.684 07:25:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.684 07:25:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.684 07:25:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.684 07:25:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.684 07:25:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.684 07:25:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.684 07:25:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.684 07:25:52 -- paths/export.sh@5 -- # export PATH 00:19:50.684 07:25:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.684 07:25:52 -- nvmf/common.sh@46 -- # : 0 00:19:50.684 07:25:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:50.684 07:25:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:50.684 07:25:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:50.684 07:25:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.684 07:25:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.684 07:25:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:50.684 07:25:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:50.684 07:25:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:50.684 07:25:52 -- host/aer.sh@11 -- # nvmftestinit 00:19:50.684 07:25:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:50.684 07:25:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.684 07:25:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:50.684 07:25:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:50.684 07:25:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:50.684 07:25:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.684 07:25:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.684 07:25:52 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.684 07:25:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:50.684 07:25:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:50.684 07:25:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.684 07:25:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.684 07:25:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:50.684 07:25:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:50.684 07:25:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.684 07:25:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.684 07:25:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.684 07:25:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.684 07:25:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.684 07:25:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.684 07:25:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.684 07:25:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.684 07:25:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:50.684 07:25:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:50.684 Cannot find device "nvmf_tgt_br" 00:19:50.684 07:25:52 -- nvmf/common.sh@154 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.684 Cannot find device "nvmf_tgt_br2" 00:19:50.684 07:25:52 -- nvmf/common.sh@155 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:50.684 07:25:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:50.684 Cannot find device "nvmf_tgt_br" 00:19:50.684 07:25:52 -- nvmf/common.sh@157 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:50.684 Cannot find device "nvmf_tgt_br2" 00:19:50.684 07:25:52 -- nvmf/common.sh@158 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:50.684 07:25:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:50.684 07:25:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.684 07:25:52 -- nvmf/common.sh@161 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.684 07:25:52 -- nvmf/common.sh@162 -- # true 00:19:50.684 07:25:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.684 07:25:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.684 07:25:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.684 07:25:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.943 07:25:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:50.943 07:25:52 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.943 07:25:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.943 07:25:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:50.943 07:25:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:50.943 07:25:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:50.943 07:25:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:50.943 07:25:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:50.943 07:25:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:50.943 07:25:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:50.943 07:25:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:50.943 07:25:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:50.943 07:25:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:50.943 07:25:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:50.943 07:25:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:50.943 07:25:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:50.943 07:25:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:50.943 07:25:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.943 07:25:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.943 07:25:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:50.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:19:50.943 00:19:50.943 --- 10.0.0.2 ping statistics --- 00:19:50.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.943 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:50.943 07:25:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:50.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:50.943 00:19:50.943 --- 10.0.0.3 ping statistics --- 00:19:50.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.943 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:50.943 07:25:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:50.943 00:19:50.943 --- 10.0.0.1 ping statistics --- 00:19:50.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.943 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:50.943 07:25:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.943 07:25:52 -- nvmf/common.sh@421 -- # return 0 00:19:50.943 07:25:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.943 07:25:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.943 07:25:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.943 07:25:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.943 07:25:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.943 07:25:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.943 07:25:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.943 07:25:52 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:50.943 07:25:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.943 07:25:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.943 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:19:50.943 07:25:52 -- nvmf/common.sh@469 -- # nvmfpid=92703 00:19:50.943 07:25:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.943 07:25:52 -- nvmf/common.sh@470 -- # waitforlisten 92703 00:19:50.943 07:25:52 -- common/autotest_common.sh@819 -- # '[' -z 92703 ']' 00:19:50.943 07:25:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.943 07:25:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.943 07:25:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.944 07:25:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.944 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:19:50.944 [2024-11-04 07:25:52.780315] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:50.944 [2024-11-04 07:25:52.780375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.203 [2024-11-04 07:25:52.915281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.203 [2024-11-04 07:25:52.976537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.203 [2024-11-04 07:25:52.976692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.203 [2024-11-04 07:25:52.976704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.203 [2024-11-04 07:25:52.976711] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
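The nvmf_veth_init block above builds the test topology before the target starts: nvmf_init_if stays in the root namespace with 10.0.0.1/24, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, iptables admits TCP/4420 on the initiator interface plus bridge-internal forwarding, and the three pings verify reachability in both directions. Condensed from the trace (root privileges assumed):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1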
00:19:51.203 [2024-11-04 07:25:52.976843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.203 [2024-11-04 07:25:52.976982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.203 [2024-11-04 07:25:52.977090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.203 [2024-11-04 07:25:52.977099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.139 07:25:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:52.139 07:25:53 -- common/autotest_common.sh@852 -- # return 0 00:19:52.139 07:25:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:52.139 07:25:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 07:25:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.139 07:25:53 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 [2024-11-04 07:25:53.879154] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 Malloc0 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 [2024-11-04 07:25:53.937261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:52.139 07:25:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.139 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.139 [2024-11-04 07:25:53.945012] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:52.139 [ 00:19:52.139 { 00:19:52.139 "allow_any_host": true, 00:19:52.139 "hosts": [], 00:19:52.139 "listen_addresses": [], 00:19:52.139 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:52.139 "subtype": "Discovery" 00:19:52.139 }, 00:19:52.139 { 00:19:52.139 "allow_any_host": true, 00:19:52.139 "hosts": 
[], 00:19:52.139 "listen_addresses": [ 00:19:52.139 { 00:19:52.139 "adrfam": "IPv4", 00:19:52.139 "traddr": "10.0.0.2", 00:19:52.139 "transport": "TCP", 00:19:52.139 "trsvcid": "4420", 00:19:52.139 "trtype": "TCP" 00:19:52.139 } 00:19:52.139 ], 00:19:52.139 "max_cntlid": 65519, 00:19:52.139 "max_namespaces": 2, 00:19:52.139 "min_cntlid": 1, 00:19:52.139 "model_number": "SPDK bdev Controller", 00:19:52.139 "namespaces": [ 00:19:52.139 { 00:19:52.139 "bdev_name": "Malloc0", 00:19:52.139 "name": "Malloc0", 00:19:52.139 "nguid": "6738B65CA5D14A50B59C0DC09A5FEAE2", 00:19:52.139 "nsid": 1, 00:19:52.139 "uuid": "6738b65c-a5d1-4a50-b59c-0dc09a5feae2" 00:19:52.139 } 00:19:52.139 ], 00:19:52.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.139 "serial_number": "SPDK00000000000001", 00:19:52.139 "subtype": "NVMe" 00:19:52.139 } 00:19:52.139 ] 00:19:52.139 07:25:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.139 07:25:53 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:52.139 07:25:53 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:52.139 07:25:53 -- host/aer.sh@33 -- # aerpid=92757 00:19:52.139 07:25:53 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:52.139 07:25:53 -- common/autotest_common.sh@1244 -- # local i=0 00:19:52.139 07:25:53 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:52.139 07:25:53 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.139 07:25:53 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:52.139 07:25:53 -- common/autotest_common.sh@1247 -- # i=1 00:19:52.139 07:25:53 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:52.398 07:25:54 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.398 07:25:54 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:52.398 07:25:54 -- common/autotest_common.sh@1247 -- # i=2 00:19:52.398 07:25:54 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:52.398 07:25:54 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.398 07:25:54 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.398 07:25:54 -- common/autotest_common.sh@1255 -- # return 0 00:19:52.398 07:25:54 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:52.398 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.398 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.398 Malloc1 00:19:52.398 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.398 07:25:54 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:52.398 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.398 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.398 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.398 07:25:54 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:52.398 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.398 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.656 Asynchronous Event Request test 00:19:52.656 Attaching to 10.0.0.2 00:19:52.656 Attached to 10.0.0.2 00:19:52.656 Registering asynchronous event callbacks... 00:19:52.656 Starting namespace attribute notice tests for all controllers... 
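The aer.sh flow interleaved above is: build a subsystem capped at two namespaces, start the in-tree aer example against it, and then hot-add a second namespace so the target raises the namespace-attribute-changed AEN that aer_cb reports. A condensed sketch of the same sequence (paths relative to the SPDK repo; rpc_cmd in the trace wraps scripts/rpc.py on the default /var/tmp/spdk.sock socket):

  RPC="scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2    # at most two namespaces
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # The aer example signals readiness by creating the touch file passed via -t.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # Adding namespace 2 triggers the AEN ("aer_cb - Changed Namespace" in the output above).
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 2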
00:19:52.656 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:52.656 aer_cb - Changed Namespace 00:19:52.656 Cleaning up... 00:19:52.656 [ 00:19:52.656 { 00:19:52.656 "allow_any_host": true, 00:19:52.656 "hosts": [], 00:19:52.656 "listen_addresses": [], 00:19:52.656 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:52.656 "subtype": "Discovery" 00:19:52.656 }, 00:19:52.656 { 00:19:52.656 "allow_any_host": true, 00:19:52.656 "hosts": [], 00:19:52.656 "listen_addresses": [ 00:19:52.656 { 00:19:52.656 "adrfam": "IPv4", 00:19:52.656 "traddr": "10.0.0.2", 00:19:52.656 "transport": "TCP", 00:19:52.656 "trsvcid": "4420", 00:19:52.656 "trtype": "TCP" 00:19:52.656 } 00:19:52.656 ], 00:19:52.656 "max_cntlid": 65519, 00:19:52.656 "max_namespaces": 2, 00:19:52.656 "min_cntlid": 1, 00:19:52.656 "model_number": "SPDK bdev Controller", 00:19:52.656 "namespaces": [ 00:19:52.656 { 00:19:52.656 "bdev_name": "Malloc0", 00:19:52.656 "name": "Malloc0", 00:19:52.656 "nguid": "6738B65CA5D14A50B59C0DC09A5FEAE2", 00:19:52.656 "nsid": 1, 00:19:52.656 "uuid": "6738b65c-a5d1-4a50-b59c-0dc09a5feae2" 00:19:52.656 }, 00:19:52.656 { 00:19:52.656 "bdev_name": "Malloc1", 00:19:52.656 "name": "Malloc1", 00:19:52.656 "nguid": "7B8A3437D1C64DAEA957B337B0AC0192", 00:19:52.656 "nsid": 2, 00:19:52.656 "uuid": "7b8a3437-d1c6-4dae-a957-b337b0ac0192" 00:19:52.656 } 00:19:52.656 ], 00:19:52.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.656 "serial_number": "SPDK00000000000001", 00:19:52.656 "subtype": "NVMe" 00:19:52.656 } 00:19:52.656 ] 00:19:52.656 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.656 07:25:54 -- host/aer.sh@43 -- # wait 92757 00:19:52.656 07:25:54 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:52.656 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.656 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.656 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.656 07:25:54 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:52.656 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.656 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.656 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.656 07:25:54 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.656 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.656 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.656 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.656 07:25:54 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:52.656 07:25:54 -- host/aer.sh@51 -- # nvmftestfini 00:19:52.656 07:25:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.656 07:25:54 -- nvmf/common.sh@116 -- # sync 00:19:52.656 07:25:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.656 07:25:54 -- nvmf/common.sh@119 -- # set +e 00:19:52.656 07:25:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.656 07:25:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.656 rmmod nvme_tcp 00:19:52.656 rmmod nvme_fabrics 00:19:52.656 rmmod nvme_keyring 00:19:52.656 07:25:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.656 07:25:54 -- nvmf/common.sh@123 -- # set -e 00:19:52.656 07:25:54 -- nvmf/common.sh@124 -- # return 0 00:19:52.657 07:25:54 -- nvmf/common.sh@477 -- # '[' -n 92703 ']' 00:19:52.657 07:25:54 -- nvmf/common.sh@478 -- # killprocess 92703 00:19:52.657 07:25:54 -- 
common/autotest_common.sh@926 -- # '[' -z 92703 ']' 00:19:52.657 07:25:54 -- common/autotest_common.sh@930 -- # kill -0 92703 00:19:52.657 07:25:54 -- common/autotest_common.sh@931 -- # uname 00:19:52.657 07:25:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.657 07:25:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92703 00:19:52.914 killing process with pid 92703 00:19:52.914 07:25:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.914 07:25:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.914 07:25:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92703' 00:19:52.914 07:25:54 -- common/autotest_common.sh@945 -- # kill 92703 00:19:52.914 [2024-11-04 07:25:54.504074] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:52.914 07:25:54 -- common/autotest_common.sh@950 -- # wait 92703 00:19:52.914 07:25:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:52.914 07:25:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:52.914 07:25:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:52.914 07:25:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.914 07:25:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:52.914 07:25:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.914 07:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.914 07:25:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.914 07:25:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:52.914 00:19:52.914 real 0m2.491s 00:19:52.914 user 0m7.141s 00:19:52.914 sys 0m0.686s 00:19:52.914 07:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.914 ************************************ 00:19:52.914 END TEST nvmf_aer 00:19:52.914 ************************************ 00:19:52.914 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 07:25:54 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.172 07:25:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:53.172 07:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.172 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 ************************************ 00:19:53.172 START TEST nvmf_async_init 00:19:53.172 ************************************ 00:19:53.172 07:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.172 * Looking for test storage... 
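Both test bodies finish with the same nvmftestfini teardown seen above: sync, unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, drop the test network namespace, and flush the initiator address. Roughly (the namespace-removal line is an assumption about what _remove_spdk_ns does; the rest mirrors the trace):

  sync
  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if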
00:19:53.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.172 07:25:54 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.172 07:25:54 -- nvmf/common.sh@7 -- # uname -s 00:19:53.172 07:25:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.172 07:25:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.173 07:25:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.173 07:25:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.173 07:25:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.173 07:25:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.173 07:25:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.173 07:25:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.173 07:25:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.173 07:25:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.173 07:25:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:53.173 07:25:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:53.173 07:25:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.173 07:25:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.173 07:25:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.173 07:25:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.173 07:25:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.173 07:25:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.173 07:25:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.173 07:25:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.173 07:25:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.173 07:25:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.173 07:25:54 -- 
paths/export.sh@5 -- # export PATH 00:19:53.173 07:25:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.173 07:25:54 -- nvmf/common.sh@46 -- # : 0 00:19:53.173 07:25:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.173 07:25:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.173 07:25:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.173 07:25:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.173 07:25:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.173 07:25:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.173 07:25:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.173 07:25:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.173 07:25:54 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:53.173 07:25:54 -- host/async_init.sh@14 -- # null_block_size=512 00:19:53.173 07:25:54 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:53.173 07:25:54 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:53.173 07:25:54 -- host/async_init.sh@20 -- # uuidgen 00:19:53.173 07:25:54 -- host/async_init.sh@20 -- # tr -d - 00:19:53.173 07:25:54 -- host/async_init.sh@20 -- # nguid=a1b30cad3b63470b9b20600998833862 00:19:53.173 07:25:54 -- host/async_init.sh@22 -- # nvmftestinit 00:19:53.173 07:25:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.173 07:25:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.173 07:25:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.173 07:25:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.173 07:25:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.173 07:25:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.173 07:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.173 07:25:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.173 07:25:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:53.173 07:25:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:53.173 07:25:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:53.173 07:25:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:53.173 07:25:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:53.173 07:25:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:53.173 07:25:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.173 07:25:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.173 07:25:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.173 07:25:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:53.173 07:25:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.173 07:25:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.173 07:25:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.173 07:25:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.173 07:25:54 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.173 07:25:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.173 07:25:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.173 07:25:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.173 07:25:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:53.173 07:25:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:53.173 Cannot find device "nvmf_tgt_br" 00:19:53.173 07:25:54 -- nvmf/common.sh@154 -- # true 00:19:53.173 07:25:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.173 Cannot find device "nvmf_tgt_br2" 00:19:53.173 07:25:54 -- nvmf/common.sh@155 -- # true 00:19:53.173 07:25:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:53.173 07:25:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:53.173 Cannot find device "nvmf_tgt_br" 00:19:53.173 07:25:54 -- nvmf/common.sh@157 -- # true 00:19:53.173 07:25:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:53.173 Cannot find device "nvmf_tgt_br2" 00:19:53.173 07:25:54 -- nvmf/common.sh@158 -- # true 00:19:53.173 07:25:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:53.432 07:25:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:53.432 07:25:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.432 07:25:55 -- nvmf/common.sh@161 -- # true 00:19:53.432 07:25:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.432 07:25:55 -- nvmf/common.sh@162 -- # true 00:19:53.432 07:25:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.432 07:25:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.432 07:25:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.432 07:25:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.432 07:25:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.432 07:25:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.432 07:25:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.432 07:25:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.432 07:25:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:53.432 07:25:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:53.432 07:25:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:53.432 07:25:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:53.432 07:25:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:53.432 07:25:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:53.432 07:25:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:53.432 07:25:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:53.432 07:25:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:53.432 07:25:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:53.432 07:25:55 -- 
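The "Cannot find device" and "Cannot open network namespace" messages around here (and earlier, before the aer run) are expected: nvmf_veth_init first tries to delete any leftovers from a previous run before recreating the topology, along the lines of the sketch below, where the failures are simply ignored.

  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true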
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:53.432 07:25:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:53.432 07:25:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.432 07:25:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.432 07:25:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.432 07:25:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:53.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:53.432 00:19:53.432 --- 10.0.0.2 ping statistics --- 00:19:53.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.432 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:53.432 07:25:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:53.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:53.432 00:19:53.432 --- 10.0.0.3 ping statistics --- 00:19:53.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.432 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:53.432 07:25:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:19:53.432 00:19:53.432 --- 10.0.0.1 ping statistics --- 00:19:53.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.432 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:53.432 07:25:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.432 07:25:55 -- nvmf/common.sh@421 -- # return 0 00:19:53.432 07:25:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:53.432 07:25:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.432 07:25:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:53.432 07:25:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:53.432 07:25:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.432 07:25:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:53.432 07:25:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:53.691 07:25:55 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:53.691 07:25:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:53.691 07:25:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:53.691 07:25:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.691 07:25:55 -- nvmf/common.sh@469 -- # nvmfpid=92926 00:19:53.691 07:25:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:53.691 07:25:55 -- nvmf/common.sh@470 -- # waitforlisten 92926 00:19:53.691 07:25:55 -- common/autotest_common.sh@819 -- # '[' -z 92926 ']' 00:19:53.691 07:25:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.691 07:25:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:53.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.691 07:25:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
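nvmfappstart then launches the async_init target pinned to core mask 0x1 inside the namespace (pid 92926 here) and waits for its JSON-RPC socket; the rpc_cmd calls traced below build the rest of the configuration: a 1024 MiB null bdev with 512-byte blocks (the attached disk later reports 2097152 blocks), a subsystem cnode0 that initially allows any host, a namespace tagged with the nguid derived earlier via uuidgen | tr -d -, and a listener on 10.0.0.2:4420 that the same app then attaches to over TCP. A condensed sketch (repo-relative paths; the socket-polling loop is just one way to approximate the harness's waitforlisten):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  RPC="scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode0
  nguid=$(uuidgen | tr -d -)                 # e.g. a1b30cad3b63470b9b20600998833862 in this run
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512       # 1024 MiB, 512 B blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem "$NQN" -a       # -a: allow any host for the first attach
  $RPC nvmf_subsystem_add_ns "$NQN" null0 -g "$nguid"
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # The same app then acts as the host: attach over TCP and inspect the resulting nvme0n1 bdev.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n "$NQN"
  $RPC bdev_get_bdevs -b nvme0n1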
00:19:53.691 07:25:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.691 07:25:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.691 [2024-11-04 07:25:55.343083] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:53.691 [2024-11-04 07:25:55.343178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.691 [2024-11-04 07:25:55.483636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.950 [2024-11-04 07:25:55.547453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:53.950 [2024-11-04 07:25:55.547581] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.950 [2024-11-04 07:25:55.547593] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.950 [2024-11-04 07:25:55.547601] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.950 [2024-11-04 07:25:55.547629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.517 07:25:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.517 07:25:56 -- common/autotest_common.sh@852 -- # return 0 00:19:54.517 07:25:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.517 07:25:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:54.517 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.517 07:25:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.517 07:25:56 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:54.517 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.517 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.517 [2024-11-04 07:25:56.335056] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.517 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.517 07:25:56 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:54.517 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.517 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.517 null0 00:19:54.517 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.517 07:25:56 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:54.517 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.517 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.517 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.517 07:25:56 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:54.517 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.517 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.776 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.776 07:25:56 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a1b30cad3b63470b9b20600998833862 00:19:54.776 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.776 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.776 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.776 07:25:56 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:54.776 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.776 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.776 [2024-11-04 07:25:56.375176] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.776 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.776 07:25:56 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:54.776 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.776 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.776 nvme0n1 00:19:54.776 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.776 07:25:56 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:54.776 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.776 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 [ 00:19:55.034 { 00:19:55.034 "aliases": [ 00:19:55.034 "a1b30cad-3b63-470b-9b20-600998833862" 00:19:55.034 ], 00:19:55.034 "assigned_rate_limits": { 00:19:55.034 "r_mbytes_per_sec": 0, 00:19:55.034 "rw_ios_per_sec": 0, 00:19:55.034 "rw_mbytes_per_sec": 0, 00:19:55.034 "w_mbytes_per_sec": 0 00:19:55.034 }, 00:19:55.034 "block_size": 512, 00:19:55.034 "claimed": false, 00:19:55.034 "driver_specific": { 00:19:55.034 "mp_policy": "active_passive", 00:19:55.034 "nvme": [ 00:19:55.034 { 00:19:55.034 "ctrlr_data": { 00:19:55.034 "ana_reporting": false, 00:19:55.034 "cntlid": 1, 00:19:55.034 "firmware_revision": "24.01.1", 00:19:55.034 "model_number": "SPDK bdev Controller", 00:19:55.034 "multi_ctrlr": true, 00:19:55.034 "oacs": { 00:19:55.034 "firmware": 0, 00:19:55.034 "format": 0, 00:19:55.034 "ns_manage": 0, 00:19:55.034 "security": 0 00:19:55.034 }, 00:19:55.034 "serial_number": "00000000000000000000", 00:19:55.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.034 "vendor_id": "0x8086" 00:19:55.034 }, 00:19:55.034 "ns_data": { 00:19:55.034 "can_share": true, 00:19:55.034 "id": 1 00:19:55.034 }, 00:19:55.034 "trid": { 00:19:55.034 "adrfam": "IPv4", 00:19:55.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.034 "traddr": "10.0.0.2", 00:19:55.034 "trsvcid": "4420", 00:19:55.034 "trtype": "TCP" 00:19:55.034 }, 00:19:55.034 "vs": { 00:19:55.034 "nvme_version": "1.3" 00:19:55.034 } 00:19:55.034 } 00:19:55.034 ] 00:19:55.034 }, 00:19:55.034 "name": "nvme0n1", 00:19:55.035 "num_blocks": 2097152, 00:19:55.035 "product_name": "NVMe disk", 00:19:55.035 "supported_io_types": { 00:19:55.035 "abort": true, 00:19:55.035 "compare": true, 00:19:55.035 "compare_and_write": true, 00:19:55.035 "flush": true, 00:19:55.035 "nvme_admin": true, 00:19:55.035 "nvme_io": true, 00:19:55.035 "read": true, 00:19:55.035 "reset": true, 00:19:55.035 "unmap": false, 00:19:55.035 "write": true, 00:19:55.035 "write_zeroes": true 00:19:55.035 }, 00:19:55.035 "uuid": "a1b30cad-3b63-470b-9b20-600998833862", 00:19:55.035 "zoned": false 00:19:55.035 } 00:19:55.035 ] 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 [2024-11-04 07:25:56.639141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:55.035 [2024-11-04 07:25:56.639411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e7a00 (9): Bad file descriptor 00:19:55.035 [2024-11-04 07:25:56.771015] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 [ 00:19:55.035 { 00:19:55.035 "aliases": [ 00:19:55.035 "a1b30cad-3b63-470b-9b20-600998833862" 00:19:55.035 ], 00:19:55.035 "assigned_rate_limits": { 00:19:55.035 "r_mbytes_per_sec": 0, 00:19:55.035 "rw_ios_per_sec": 0, 00:19:55.035 "rw_mbytes_per_sec": 0, 00:19:55.035 "w_mbytes_per_sec": 0 00:19:55.035 }, 00:19:55.035 "block_size": 512, 00:19:55.035 "claimed": false, 00:19:55.035 "driver_specific": { 00:19:55.035 "mp_policy": "active_passive", 00:19:55.035 "nvme": [ 00:19:55.035 { 00:19:55.035 "ctrlr_data": { 00:19:55.035 "ana_reporting": false, 00:19:55.035 "cntlid": 2, 00:19:55.035 "firmware_revision": "24.01.1", 00:19:55.035 "model_number": "SPDK bdev Controller", 00:19:55.035 "multi_ctrlr": true, 00:19:55.035 "oacs": { 00:19:55.035 "firmware": 0, 00:19:55.035 "format": 0, 00:19:55.035 "ns_manage": 0, 00:19:55.035 "security": 0 00:19:55.035 }, 00:19:55.035 "serial_number": "00000000000000000000", 00:19:55.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.035 "vendor_id": "0x8086" 00:19:55.035 }, 00:19:55.035 "ns_data": { 00:19:55.035 "can_share": true, 00:19:55.035 "id": 1 00:19:55.035 }, 00:19:55.035 "trid": { 00:19:55.035 "adrfam": "IPv4", 00:19:55.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.035 "traddr": "10.0.0.2", 00:19:55.035 "trsvcid": "4420", 00:19:55.035 "trtype": "TCP" 00:19:55.035 }, 00:19:55.035 "vs": { 00:19:55.035 "nvme_version": "1.3" 00:19:55.035 } 00:19:55.035 } 00:19:55.035 ] 00:19:55.035 }, 00:19:55.035 "name": "nvme0n1", 00:19:55.035 "num_blocks": 2097152, 00:19:55.035 "product_name": "NVMe disk", 00:19:55.035 "supported_io_types": { 00:19:55.035 "abort": true, 00:19:55.035 "compare": true, 00:19:55.035 "compare_and_write": true, 00:19:55.035 "flush": true, 00:19:55.035 "nvme_admin": true, 00:19:55.035 "nvme_io": true, 00:19:55.035 "read": true, 00:19:55.035 "reset": true, 00:19:55.035 "unmap": false, 00:19:55.035 "write": true, 00:19:55.035 "write_zeroes": true 00:19:55.035 }, 00:19:55.035 "uuid": "a1b30cad-3b63-470b-9b20-600998833862", 00:19:55.035 "zoned": false 00:19:55.035 } 00:19:55.035 ] 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@53 -- # mktemp 00:19:55.035 07:25:56 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l0g0myUrS2 00:19:55.035 07:25:56 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.035 07:25:56 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l0g0myUrS2 00:19:55.035 07:25:56 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 [2024-11-04 07:25:56.843315] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.035 [2024-11-04 07:25:56.843430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l0g0myUrS2 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.035 07:25:56 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l0g0myUrS2 00:19:55.035 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.035 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.035 [2024-11-04 07:25:56.859300] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.294 nvme0n1 00:19:55.294 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.294 07:25:56 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.294 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.294 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.294 [ 00:19:55.294 { 00:19:55.294 "aliases": [ 00:19:55.294 "a1b30cad-3b63-470b-9b20-600998833862" 00:19:55.294 ], 00:19:55.294 "assigned_rate_limits": { 00:19:55.294 "r_mbytes_per_sec": 0, 00:19:55.294 "rw_ios_per_sec": 0, 00:19:55.294 "rw_mbytes_per_sec": 0, 00:19:55.294 "w_mbytes_per_sec": 0 00:19:55.294 }, 00:19:55.294 "block_size": 512, 00:19:55.294 "claimed": false, 00:19:55.294 "driver_specific": { 00:19:55.294 "mp_policy": "active_passive", 00:19:55.294 "nvme": [ 00:19:55.294 { 00:19:55.294 "ctrlr_data": { 00:19:55.294 "ana_reporting": false, 00:19:55.294 "cntlid": 3, 00:19:55.294 "firmware_revision": "24.01.1", 00:19:55.294 "model_number": "SPDK bdev Controller", 00:19:55.294 "multi_ctrlr": true, 00:19:55.294 "oacs": { 00:19:55.294 "firmware": 0, 00:19:55.294 "format": 0, 00:19:55.294 "ns_manage": 0, 00:19:55.294 "security": 0 00:19:55.294 }, 00:19:55.294 "serial_number": "00000000000000000000", 00:19:55.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.294 "vendor_id": "0x8086" 00:19:55.294 }, 00:19:55.294 "ns_data": { 00:19:55.294 "can_share": true, 00:19:55.294 "id": 1 00:19:55.294 }, 00:19:55.294 "trid": { 00:19:55.294 "adrfam": "IPv4", 00:19:55.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.294 "traddr": "10.0.0.2", 00:19:55.294 "trsvcid": "4421", 00:19:55.294 "trtype": "TCP" 00:19:55.294 }, 00:19:55.294 "vs": { 00:19:55.294 "nvme_version": "1.3" 00:19:55.294 } 00:19:55.294 } 00:19:55.294 ] 00:19:55.294 }, 00:19:55.294 
"name": "nvme0n1", 00:19:55.294 "num_blocks": 2097152, 00:19:55.294 "product_name": "NVMe disk", 00:19:55.294 "supported_io_types": { 00:19:55.294 "abort": true, 00:19:55.294 "compare": true, 00:19:55.294 "compare_and_write": true, 00:19:55.294 "flush": true, 00:19:55.294 "nvme_admin": true, 00:19:55.294 "nvme_io": true, 00:19:55.294 "read": true, 00:19:55.294 "reset": true, 00:19:55.294 "unmap": false, 00:19:55.294 "write": true, 00:19:55.294 "write_zeroes": true 00:19:55.294 }, 00:19:55.294 "uuid": "a1b30cad-3b63-470b-9b20-600998833862", 00:19:55.294 "zoned": false 00:19:55.294 } 00:19:55.294 ] 00:19:55.294 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.294 07:25:56 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.294 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.294 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.294 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.294 07:25:56 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.l0g0myUrS2 00:19:55.294 07:25:56 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:55.294 07:25:56 -- host/async_init.sh@78 -- # nvmftestfini 00:19:55.294 07:25:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.294 07:25:56 -- nvmf/common.sh@116 -- # sync 00:19:55.294 07:25:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.294 07:25:57 -- nvmf/common.sh@119 -- # set +e 00:19:55.294 07:25:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.294 07:25:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.294 rmmod nvme_tcp 00:19:55.294 rmmod nvme_fabrics 00:19:55.294 rmmod nvme_keyring 00:19:55.294 07:25:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.294 07:25:57 -- nvmf/common.sh@123 -- # set -e 00:19:55.294 07:25:57 -- nvmf/common.sh@124 -- # return 0 00:19:55.294 07:25:57 -- nvmf/common.sh@477 -- # '[' -n 92926 ']' 00:19:55.294 07:25:57 -- nvmf/common.sh@478 -- # killprocess 92926 00:19:55.294 07:25:57 -- common/autotest_common.sh@926 -- # '[' -z 92926 ']' 00:19:55.294 07:25:57 -- common/autotest_common.sh@930 -- # kill -0 92926 00:19:55.294 07:25:57 -- common/autotest_common.sh@931 -- # uname 00:19:55.294 07:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.294 07:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92926 00:19:55.553 killing process with pid 92926 00:19:55.553 07:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.553 07:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.553 07:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92926' 00:19:55.553 07:25:57 -- common/autotest_common.sh@945 -- # kill 92926 00:19:55.553 07:25:57 -- common/autotest_common.sh@950 -- # wait 92926 00:19:55.553 07:25:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:55.553 07:25:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.553 07:25:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.553 07:25:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.553 07:25:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:55.553 07:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.553 07:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.553 07:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.553 07:25:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:55.553 
00:19:55.553 real 0m2.581s 00:19:55.553 user 0m2.409s 00:19:55.553 sys 0m0.593s 00:19:55.553 07:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.553 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.553 ************************************ 00:19:55.553 END TEST nvmf_async_init 00:19:55.553 ************************************ 00:19:55.812 07:25:57 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:55.812 07:25:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:55.812 07:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:55.812 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.812 ************************************ 00:19:55.812 START TEST dma 00:19:55.812 ************************************ 00:19:55.812 07:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:55.812 * Looking for test storage... 00:19:55.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:55.812 07:25:57 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.812 07:25:57 -- nvmf/common.sh@7 -- # uname -s 00:19:55.812 07:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.812 07:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.812 07:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.812 07:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.812 07:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.812 07:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.812 07:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.812 07:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.812 07:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.812 07:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.812 07:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:55.812 07:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:55.812 07:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.812 07:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.812 07:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.812 07:25:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.812 07:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.812 07:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.812 07:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.813 07:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.813 07:25:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.813 07:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.813 07:25:57 -- paths/export.sh@5 -- # export PATH 00:19:55.813 07:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.813 07:25:57 -- nvmf/common.sh@46 -- # : 0 00:19:55.813 07:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:55.813 07:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:55.813 07:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:55.813 07:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.813 07:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.813 07:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:55.813 07:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:55.813 07:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:55.813 07:25:57 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:55.813 07:25:57 -- host/dma.sh@13 -- # exit 0 00:19:55.813 ************************************ 00:19:55.813 END TEST dma 00:19:55.813 ************************************ 00:19:55.813 00:19:55.813 real 0m0.111s 00:19:55.813 user 0m0.052s 00:19:55.813 sys 0m0.064s 00:19:55.813 07:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.813 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.813 07:25:57 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:55.813 07:25:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:55.813 07:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:55.813 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.813 ************************************ 00:19:55.813 START TEST nvmf_identify 00:19:55.813 ************************************ 00:19:55.813 07:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.072 * Looking for test storage... 
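Why TEST dma finishes in a fraction of a second with no target running: the dma host test only applies to RDMA, so on a TCP run it sources nvmf/common.sh and immediately bails out at dma.sh lines 12-13, exactly as the trace shows. A sketch of that guard; the trace only shows the already-expanded value "tcp", so the TEST_TRANSPORT variable name here is an assumption:

  if [ "$TEST_TRANSPORT" != rdma ]; then
      exit 0
  fi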
00:19:56.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.072 07:25:57 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.072 07:25:57 -- nvmf/common.sh@7 -- # uname -s 00:19:56.072 07:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.072 07:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.072 07:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.072 07:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.072 07:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.072 07:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.072 07:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.072 07:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.072 07:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.072 07:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.072 07:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:56.072 07:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:56.072 07:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.072 07:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.072 07:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.072 07:25:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.072 07:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.072 07:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.072 07:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.072 07:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.073 07:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.073 07:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.073 07:25:57 -- paths/export.sh@5 
-- # export PATH 00:19:56.073 07:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.073 07:25:57 -- nvmf/common.sh@46 -- # : 0 00:19:56.073 07:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.073 07:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.073 07:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.073 07:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.073 07:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.073 07:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:56.073 07:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.073 07:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.073 07:25:57 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.073 07:25:57 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.073 07:25:57 -- host/identify.sh@14 -- # nvmftestinit 00:19:56.073 07:25:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.073 07:25:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.073 07:25:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.073 07:25:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.073 07:25:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.073 07:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.073 07:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.073 07:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.073 07:25:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.073 07:25:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.073 07:25:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.073 07:25:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.073 07:25:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.073 07:25:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.073 07:25:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.073 07:25:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.073 07:25:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.073 07:25:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.073 07:25:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.073 07:25:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.073 07:25:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.073 07:25:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.073 07:25:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.073 07:25:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.073 07:25:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.073 07:25:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.073 07:25:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.073 07:25:57 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.073 Cannot find device "nvmf_tgt_br" 00:19:56.073 07:25:57 -- nvmf/common.sh@154 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.073 Cannot find device "nvmf_tgt_br2" 00:19:56.073 07:25:57 -- nvmf/common.sh@155 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.073 07:25:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.073 Cannot find device "nvmf_tgt_br" 00:19:56.073 07:25:57 -- nvmf/common.sh@157 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.073 Cannot find device "nvmf_tgt_br2" 00:19:56.073 07:25:57 -- nvmf/common.sh@158 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.073 07:25:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.073 07:25:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.073 07:25:57 -- nvmf/common.sh@161 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.073 07:25:57 -- nvmf/common.sh@162 -- # true 00:19:56.073 07:25:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.073 07:25:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.073 07:25:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.073 07:25:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.073 07:25:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.073 07:25:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.332 07:25:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.332 07:25:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.332 07:25:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.332 07:25:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.332 07:25:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.332 07:25:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.332 07:25:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.332 07:25:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.332 07:25:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.332 07:25:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.332 07:25:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.332 07:25:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.332 07:25:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.332 07:25:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.332 07:25:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.332 07:25:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.332 07:25:58 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.332 07:25:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:19:56.332 00:19:56.332 --- 10.0.0.2 ping statistics --- 00:19:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.332 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:56.332 07:25:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:56.332 00:19:56.332 --- 10.0.0.3 ping statistics --- 00:19:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.332 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:56.332 07:25:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:56.332 00:19:56.332 --- 10.0.0.1 ping statistics --- 00:19:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.332 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:56.332 07:25:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.332 07:25:58 -- nvmf/common.sh@421 -- # return 0 00:19:56.332 07:25:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.332 07:25:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.332 07:25:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.332 07:25:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.332 07:25:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.332 07:25:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.332 07:25:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.332 07:25:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:56.332 07:25:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:56.332 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.332 07:25:58 -- host/identify.sh@19 -- # nvmfpid=93189 00:19:56.332 07:25:58 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.332 07:25:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.332 07:25:58 -- host/identify.sh@23 -- # waitforlisten 93189 00:19:56.332 07:25:58 -- common/autotest_common.sh@819 -- # '[' -z 93189 ']' 00:19:56.332 07:25:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.332 07:25:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.332 07:25:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.332 07:25:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.332 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.332 [2024-11-04 07:25:58.126818] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
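The "Cannot find device" and "Cannot open network namespace" messages above are harmless: nvmf_veth_init first tears down any leftover topology, then rebuilds it. The layout is a pair of veth links joined by a Linux bridge, with the target end moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch using the names and addresses from the trace (the second target interface, 10.0.0.3, is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2                                  # root namespace -> target, through the bridge
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # and back

The nvmf_tgt application itself is then launched inside the namespace (note the ip netns exec nvmf_tgt_ns_spdk prefix on the -i 0 -e 0xFFFF -m 0xF invocation above), while the RPC socket and the initiator-side tools stay in the root namespace and reach the target over 10.0.0.2.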
00:19:56.332 [2024-11-04 07:25:58.126921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.591 [2024-11-04 07:25:58.268005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.591 [2024-11-04 07:25:58.327782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:56.591 [2024-11-04 07:25:58.327958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.591 [2024-11-04 07:25:58.327973] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.591 [2024-11-04 07:25:58.327980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.591 [2024-11-04 07:25:58.328146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.591 [2024-11-04 07:25:58.328682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.591 [2024-11-04 07:25:58.329028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.591 [2024-11-04 07:25:58.329039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.554 07:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:57.554 07:25:59 -- common/autotest_common.sh@852 -- # return 0 00:19:57.554 07:25:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 [2024-11-04 07:25:59.167664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.554 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.554 07:25:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:57.554 07:25:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 07:25:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 Malloc0 00:19:57.554 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.554 07:25:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.554 07:25:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.554 07:25:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.554 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.554 [2024-11-04 07:25:59.273973] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.554 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.554 07:25:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:57.554 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.555 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.555 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.555 07:25:59 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:57.555 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.555 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.555 [2024-11-04 07:25:59.289702] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:57.555 [ 00:19:57.555 { 00:19:57.555 "allow_any_host": true, 00:19:57.555 "hosts": [], 00:19:57.555 "listen_addresses": [ 00:19:57.555 { 00:19:57.555 "adrfam": "IPv4", 00:19:57.555 "traddr": "10.0.0.2", 00:19:57.555 "transport": "TCP", 00:19:57.555 "trsvcid": "4420", 00:19:57.555 "trtype": "TCP" 00:19:57.555 } 00:19:57.555 ], 00:19:57.555 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.555 "subtype": "Discovery" 00:19:57.555 }, 00:19:57.555 { 00:19:57.555 "allow_any_host": true, 00:19:57.555 "hosts": [], 00:19:57.555 "listen_addresses": [ 00:19:57.555 { 00:19:57.555 "adrfam": "IPv4", 00:19:57.555 "traddr": "10.0.0.2", 00:19:57.555 "transport": "TCP", 00:19:57.555 "trsvcid": "4420", 00:19:57.555 "trtype": "TCP" 00:19:57.555 } 00:19:57.555 ], 00:19:57.555 "max_cntlid": 65519, 00:19:57.555 "max_namespaces": 32, 00:19:57.555 "min_cntlid": 1, 00:19:57.555 "model_number": "SPDK bdev Controller", 00:19:57.555 "namespaces": [ 00:19:57.555 { 00:19:57.555 "bdev_name": "Malloc0", 00:19:57.555 "eui64": "ABCDEF0123456789", 00:19:57.555 "name": "Malloc0", 00:19:57.555 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:57.555 "nsid": 1, 00:19:57.555 "uuid": "5b65e086-fd4d-4ab4-9649-420718a94f25" 00:19:57.555 } 00:19:57.555 ], 00:19:57.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.555 "serial_number": "SPDK00000000000001", 00:19:57.555 "subtype": "NVMe" 00:19:57.555 } 00:19:57.555 ] 00:19:57.555 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.555 07:25:59 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:57.555 [2024-11-04 07:25:59.328366] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
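The identify host test configures a slightly richer target than async_init: a 64 MiB malloc bdev with 512-byte blocks, a data subsystem with explicit NGUID/EUI-64, the discovery subsystem on the same port, and then the userspace spdk_nvme_identify tool run against the discovery NQN (the debug flood that follows is that tool bringing up its admin queue). A minimal sketch of the equivalent steps, again assuming scripts/rpc.py against the running target, with all flags copied from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

  # connect to the discovery controller from the root namespace and dump everything, with full debug tracing
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all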
00:19:57.555 [2024-11-04 07:25:59.328431] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93242 ] 00:19:57.830 [2024-11-04 07:25:59.465980] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:57.830 [2024-11-04 07:25:59.466053] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:57.830 [2024-11-04 07:25:59.466060] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:57.830 [2024-11-04 07:25:59.466069] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:57.830 [2024-11-04 07:25:59.466078] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:57.830 [2024-11-04 07:25:59.466202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:57.830 [2024-11-04 07:25:59.466287] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2283510 0 00:19:57.830 [2024-11-04 07:25:59.478928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:57.830 [2024-11-04 07:25:59.478966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:57.830 [2024-11-04 07:25:59.478972] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:57.830 [2024-11-04 07:25:59.478975] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:57.830 [2024-11-04 07:25:59.479019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.479026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.479030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.830 [2024-11-04 07:25:59.479043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:57.830 [2024-11-04 07:25:59.479072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.830 [2024-11-04 07:25:59.486938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.830 [2024-11-04 07:25:59.486957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.830 [2024-11-04 07:25:59.486978] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.486983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.830 [2024-11-04 07:25:59.486997] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:57.830 [2024-11-04 07:25:59.487005] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:57.830 [2024-11-04 07:25:59.487010] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:57.830 [2024-11-04 07:25:59.487026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.830 [2024-11-04 
07:25:59.487034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.830 [2024-11-04 07:25:59.487042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.830 [2024-11-04 07:25:59.487069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.830 [2024-11-04 07:25:59.487147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.830 [2024-11-04 07:25:59.487153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.830 [2024-11-04 07:25:59.487157] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487160] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.830 [2024-11-04 07:25:59.487166] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:57.830 [2024-11-04 07:25:59.487173] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:57.830 [2024-11-04 07:25:59.487180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.830 [2024-11-04 07:25:59.487193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.830 [2024-11-04 07:25:59.487240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.830 [2024-11-04 07:25:59.487308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.830 [2024-11-04 07:25:59.487314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.830 [2024-11-04 07:25:59.487318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.830 [2024-11-04 07:25:59.487328] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:57.830 [2024-11-04 07:25:59.487336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:57.830 [2024-11-04 07:25:59.487342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.830 [2024-11-04 07:25:59.487349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.830 [2024-11-04 07:25:59.487356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.830 [2024-11-04 07:25:59.487374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.487442] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.487449] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.487452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487456] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.487478] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:57.831 [2024-11-04 07:25:59.487487] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487491] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.487502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.831 [2024-11-04 07:25:59.487520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.487584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.487590] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.487593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487597] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.487603] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:57.831 [2024-11-04 07:25:59.487608] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:57.831 [2024-11-04 07:25:59.487615] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:57.831 [2024-11-04 07:25:59.487720] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:57.831 [2024-11-04 07:25:59.487725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:57.831 [2024-11-04 07:25:59.487733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487741] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.487748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.831 [2024-11-04 07:25:59.487766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.487836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.487844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.487848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
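One decoding aid for the admin commands that appear a little further down in this trace: the discovery controller is read with GET LOG PAGE (opcode 02h) against log identifier 0x70, the discovery log page, and the transfer length is encoded in the upper half of cdw10 as a 0's-based dword count (cdw11, which would carry the upper bits of that count, is zero here). A quick bash check of the two cdw10 values that show up below, 0x00ff0070 and 0x02ff0070:

  for cdw10 in 0x00ff0070 0x02ff0070; do
      printf 'cdw10=%s  lid=0x%02x  bytes=%d\n' "$cdw10" $(( cdw10 & 0xff )) $(( ((cdw10 >> 16) + 1) * 4 ))
  done
  # cdw10=0x00ff0070  lid=0x70  bytes=1024
  # cdw10=0x02ff0070  lid=0x70  bytes=3072

which matches the datal=1024 and datal=3072 c2h_data transfers logged for those two commands.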
00:19:57.831 [2024-11-04 07:25:59.487851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.487857] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:57.831 [2024-11-04 07:25:59.487867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.487874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.487881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.831 [2024-11-04 07:25:59.487900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.487988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.487997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.488000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.488010] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:57.831 [2024-11-04 07:25:59.488015] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488023] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:57.831 [2024-11-04 07:25:59.488038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.831 [2024-11-04 07:25:59.488086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.488185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:57.831 [2024-11-04 07:25:59.488192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:57.831 [2024-11-04 07:25:59.488196] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488200] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2283510): datao=0, datal=4096, cccid=0 00:19:57.831 [2024-11-04 07:25:59.488204] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22cf8a0) on tqpair(0x2283510): expected_datao=0, 
payload_size=4096 00:19:57.831 [2024-11-04 07:25:59.488213] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488218] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.488232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.488236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.488248] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:57.831 [2024-11-04 07:25:59.488254] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:57.831 [2024-11-04 07:25:59.488258] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:57.831 [2024-11-04 07:25:59.488264] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:57.831 [2024-11-04 07:25:59.488268] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:57.831 [2024-11-04 07:25:59.488273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488304] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488323] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.831 [2024-11-04 07:25:59.488354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.488424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.831 [2024-11-04 07:25:59.488431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.831 [2024-11-04 07:25:59.488434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cf8a0) on tqpair=0x2283510 00:19:57.831 [2024-11-04 07:25:59.488446] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.831 [2024-11-04 
07:25:59.488466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.831 [2024-11-04 07:25:59.488483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.831 [2024-11-04 07:25:59.488501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.831 [2024-11-04 07:25:59.488518] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:57.831 [2024-11-04 07:25:59.488537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.831 [2024-11-04 07:25:59.488544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2283510) 00:19:57.831 [2024-11-04 07:25:59.488550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.831 [2024-11-04 07:25:59.488571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cf8a0, cid 0, qid 0 00:19:57.831 [2024-11-04 07:25:59.488577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfa00, cid 1, qid 0 00:19:57.832 [2024-11-04 07:25:59.488582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfb60, cid 2, qid 0 00:19:57.832 [2024-11-04 07:25:59.488586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.832 [2024-11-04 07:25:59.488591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfe20, cid 4, qid 0 00:19:57.832 [2024-11-04 07:25:59.488691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.832 [2024-11-04 07:25:59.488697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.832 [2024-11-04 07:25:59.488700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x22cfe20) on tqpair=0x2283510 00:19:57.832 [2024-11-04 07:25:59.488710] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:57.832 [2024-11-04 07:25:59.488716] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:57.832 [2024-11-04 07:25:59.488725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2283510) 00:19:57.832 [2024-11-04 07:25:59.488739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.832 [2024-11-04 07:25:59.488758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfe20, cid 4, qid 0 00:19:57.832 [2024-11-04 07:25:59.488835] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:57.832 [2024-11-04 07:25:59.488842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:57.832 [2024-11-04 07:25:59.488845] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488849] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2283510): datao=0, datal=4096, cccid=4 00:19:57.832 [2024-11-04 07:25:59.488853] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22cfe20) on tqpair(0x2283510): expected_datao=0, payload_size=4096 00:19:57.832 [2024-11-04 07:25:59.488860] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488864] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.832 [2024-11-04 07:25:59.488877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.832 [2024-11-04 07:25:59.488881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfe20) on tqpair=0x2283510 00:19:57.832 [2024-11-04 07:25:59.488911] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:57.832 [2024-11-04 07:25:59.488958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488968] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2283510) 00:19:57.832 [2024-11-04 07:25:59.488979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.832 [2024-11-04 07:25:59.488987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.488994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2283510) 00:19:57.832 [2024-11-04 07:25:59.488999] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.832 [2024-11-04 07:25:59.489030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfe20, cid 4, qid 0 00:19:57.832 [2024-11-04 07:25:59.489037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cff80, cid 5, qid 0 00:19:57.832 [2024-11-04 07:25:59.489144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:57.832 [2024-11-04 07:25:59.489150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:57.832 [2024-11-04 07:25:59.489154] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.489158] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2283510): datao=0, datal=1024, cccid=4 00:19:57.832 [2024-11-04 07:25:59.489163] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22cfe20) on tqpair(0x2283510): expected_datao=0, payload_size=1024 00:19:57.832 [2024-11-04 07:25:59.489170] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.489173] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.489179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.832 [2024-11-04 07:25:59.489184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.832 [2024-11-04 07:25:59.489187] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.489191] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cff80) on tqpair=0x2283510 00:19:57.832 [2024-11-04 07:25:59.534924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.832 [2024-11-04 07:25:59.534944] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.832 [2024-11-04 07:25:59.534949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.534969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfe20) on tqpair=0x2283510 00:19:57.832 [2024-11-04 07:25:59.534983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.534988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.534991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2283510) 00:19:57.832 [2024-11-04 07:25:59.534999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.832 [2024-11-04 07:25:59.535029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfe20, cid 4, qid 0 00:19:57.832 [2024-11-04 07:25:59.535112] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:57.832 [2024-11-04 07:25:59.535118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:57.832 [2024-11-04 07:25:59.535122] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535125] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2283510): datao=0, datal=3072, cccid=4 00:19:57.832 [2024-11-04 07:25:59.535129] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22cfe20) on tqpair(0x2283510): expected_datao=0, payload_size=3072 00:19:57.832 [2024-11-04 
07:25:59.535136] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535140] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.832 [2024-11-04 07:25:59.535152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.832 [2024-11-04 07:25:59.535155] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfe20) on tqpair=0x2283510 00:19:57.832 [2024-11-04 07:25:59.535168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535172] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535175] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2283510) 00:19:57.832 [2024-11-04 07:25:59.535197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.832 [2024-11-04 07:25:59.535232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfe20, cid 4, qid 0 00:19:57.832 [2024-11-04 07:25:59.535330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:57.832 [2024-11-04 07:25:59.535336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:57.832 [2024-11-04 07:25:59.535339] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535342] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2283510): datao=0, datal=8, cccid=4 00:19:57.832 [2024-11-04 07:25:59.535347] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22cfe20) on tqpair(0x2283510): expected_datao=0, payload_size=8 00:19:57.832 [2024-11-04 07:25:59.535353] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:57.832 [2024-11-04 07:25:59.535357] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:57.832 ===================================================== 00:19:57.832 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:57.832 ===================================================== 00:19:57.832 Controller Capabilities/Features 00:19:57.832 ================================ 00:19:57.832 Vendor ID: 0000 00:19:57.832 Subsystem Vendor ID: 0000 00:19:57.832 Serial Number: .................... 00:19:57.832 Model Number: ........................................ 
00:19:57.832 Firmware Version: 24.01.1 00:19:57.832 Recommended Arb Burst: 0 00:19:57.832 IEEE OUI Identifier: 00 00 00 00:19:57.832 Multi-path I/O 00:19:57.832 May have multiple subsystem ports: No 00:19:57.832 May have multiple controllers: No 00:19:57.832 Associated with SR-IOV VF: No 00:19:57.832 Max Data Transfer Size: 131072 00:19:57.832 Max Number of Namespaces: 0 00:19:57.832 Max Number of I/O Queues: 1024 00:19:57.832 NVMe Specification Version (VS): 1.3 00:19:57.832 NVMe Specification Version (Identify): 1.3 00:19:57.832 Maximum Queue Entries: 128 00:19:57.832 Contiguous Queues Required: Yes 00:19:57.832 Arbitration Mechanisms Supported 00:19:57.832 Weighted Round Robin: Not Supported 00:19:57.832 Vendor Specific: Not Supported 00:19:57.832 Reset Timeout: 15000 ms 00:19:57.832 Doorbell Stride: 4 bytes 00:19:57.832 NVM Subsystem Reset: Not Supported 00:19:57.832 Command Sets Supported 00:19:57.832 NVM Command Set: Supported 00:19:57.832 Boot Partition: Not Supported 00:19:57.832 Memory Page Size Minimum: 4096 bytes 00:19:57.832 Memory Page Size Maximum: 4096 bytes 00:19:57.832 Persistent Memory Region: Not Supported 00:19:57.832 Optional Asynchronous Events Supported 00:19:57.832 Namespace Attribute Notices: Not Supported 00:19:57.832 Firmware Activation Notices: Not Supported 00:19:57.832 ANA Change Notices: Not Supported 00:19:57.832 PLE Aggregate Log Change Notices: Not Supported 00:19:57.832 LBA Status Info Alert Notices: Not Supported 00:19:57.832 EGE Aggregate Log Change Notices: Not Supported 00:19:57.832 Normal NVM Subsystem Shutdown event: Not Supported 00:19:57.832 Zone Descriptor Change Notices: Not Supported 00:19:57.832 Discovery Log Change Notices: Supported 00:19:57.832 Controller Attributes 00:19:57.832 128-bit Host Identifier: Not Supported 00:19:57.833 Non-Operational Permissive Mode: Not Supported 00:19:57.833 NVM Sets: Not Supported 00:19:57.833 Read Recovery Levels: Not Supported 00:19:57.833 Endurance Groups: Not Supported 00:19:57.833 Predictable Latency Mode: Not Supported 00:19:57.833 Traffic Based Keep ALive: Not Supported 00:19:57.833 Namespace Granularity: Not Supported 00:19:57.833 SQ Associations: Not Supported 00:19:57.833 UUID List: Not Supported 00:19:57.833 Multi-Domain Subsystem: Not Supported 00:19:57.833 Fixed Capacity Management: Not Supported 00:19:57.833 Variable Capacity Management: Not Supported 00:19:57.833 Delete Endurance Group: Not Supported 00:19:57.833 Delete NVM Set: Not Supported 00:19:57.833 Extended LBA Formats Supported: Not Supported 00:19:57.833 Flexible Data Placement Supported: Not Supported 00:19:57.833 00:19:57.833 Controller Memory Buffer Support 00:19:57.833 ================================ 00:19:57.833 Supported: No 00:19:57.833 00:19:57.833 Persistent Memory Region Support 00:19:57.833 ================================ 00:19:57.833 Supported: No 00:19:57.833 00:19:57.833 Admin Command Set Attributes 00:19:57.833 ============================ 00:19:57.833 Security Send/Receive: Not Supported 00:19:57.833 Format NVM: Not Supported 00:19:57.833 Firmware Activate/Download: Not Supported 00:19:57.833 Namespace Management: Not Supported 00:19:57.833 Device Self-Test: Not Supported 00:19:57.833 Directives: Not Supported 00:19:57.833 NVMe-MI: Not Supported 00:19:57.833 Virtualization Management: Not Supported 00:19:57.833 Doorbell Buffer Config: Not Supported 00:19:57.833 Get LBA Status Capability: Not Supported 00:19:57.833 Command & Feature Lockdown Capability: Not Supported 00:19:57.833 Abort Command Limit: 1 00:19:57.833 
Async Event Request Limit: 4 00:19:57.833 Number of Firmware Slots: N/A 00:19:57.833 Firmware Slot 1 Read-Only: N/A 00:19:57.833 [2024-11-04 07:25:59.576981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.833 [2024-11-04 07:25:59.577001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.833 [2024-11-04 07:25:59.577023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfe20) on tqpair=0x2283510 00:19:57.833 Firmware Activation Without Reset: N/A 00:19:57.833 Multiple Update Detection Support: N/A 00:19:57.833 Firmware Update Granularity: No Information Provided 00:19:57.833 Per-Namespace SMART Log: No 00:19:57.833 Asymmetric Namespace Access Log Page: Not Supported 00:19:57.833 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:57.833 Command Effects Log Page: Not Supported 00:19:57.833 Get Log Page Extended Data: Supported 00:19:57.833 Telemetry Log Pages: Not Supported 00:19:57.833 Persistent Event Log Pages: Not Supported 00:19:57.833 Supported Log Pages Log Page: May Support 00:19:57.833 Commands Supported & Effects Log Page: Not Supported 00:19:57.833 Feature Identifiers & Effects Log Page: May Support 00:19:57.833 NVMe-MI Commands & Effects Log Page: May Support 00:19:57.833 Data Area 4 for Telemetry Log: Not Supported 00:19:57.833 Error Log Page Entries Supported: 128 00:19:57.833 Keep Alive: Not Supported 00:19:57.833 00:19:57.833 NVM Command Set Attributes 00:19:57.833 ========================== 00:19:57.833 Submission Queue Entry Size 00:19:57.833 Max: 1 00:19:57.833 Min: 1 00:19:57.833 Completion Queue Entry Size 00:19:57.833 Max: 1 00:19:57.833 Min: 1 00:19:57.833 Number of Namespaces: 0 00:19:57.833 Compare Command: Not Supported 00:19:57.833 Write Uncorrectable Command: Not Supported 00:19:57.833 Dataset Management Command: Not Supported 00:19:57.833 Write Zeroes Command: Not Supported 00:19:57.833 Set Features Save Field: Not Supported 00:19:57.833 Reservations: Not Supported 00:19:57.833 Timestamp: Not Supported 00:19:57.833 Copy: Not Supported 00:19:57.833 Volatile Write Cache: Not Present 00:19:57.833 Atomic Write Unit (Normal): 1 00:19:57.833 Atomic Write Unit (PFail): 1 00:19:57.833 Atomic Compare & Write Unit: 1 00:19:57.833 Fused Compare & Write: Supported 00:19:57.833 Scatter-Gather List 00:19:57.833 SGL Command Set: Supported 00:19:57.833 SGL Keyed: Supported 00:19:57.833 SGL Bit Bucket Descriptor: Not Supported 00:19:57.833 SGL Metadata Pointer: Not Supported 00:19:57.833 Oversized SGL: Not Supported 00:19:57.833 SGL Metadata Address: Not Supported 00:19:57.833 SGL Offset: Supported 00:19:57.833 Transport SGL Data Block: Not Supported 00:19:57.833 Replay Protected Memory Block: Not Supported 00:19:57.833 00:19:57.833 Firmware Slot Information 00:19:57.833 ========================= 00:19:57.833 Active slot: 0 00:19:57.833 00:19:57.833 00:19:57.833 Error Log 00:19:57.833 ========= 00:19:57.833 00:19:57.833 Active Namespaces 00:19:57.833 ================= 00:19:57.833 Discovery Log Page 00:19:57.833 ================== 00:19:57.833 Generation Counter: 2 00:19:57.833 Number of Records: 2 00:19:57.833 Record Format: 0 00:19:57.833 00:19:57.833 Discovery Log Entry 0 00:19:57.833 ---------------------- 00:19:57.833 Transport Type: 3 (TCP) 00:19:57.833 Address Family: 1 (IPv4) 00:19:57.833 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:57.833 Entry Flags: 00:19:57.833 Duplicate
Returned Information: 1 00:19:57.833 Explicit Persistent Connection Support for Discovery: 1 00:19:57.833 Transport Requirements: 00:19:57.833 Secure Channel: Not Required 00:19:57.833 Port ID: 0 (0x0000) 00:19:57.833 Controller ID: 65535 (0xffff) 00:19:57.833 Admin Max SQ Size: 128 00:19:57.833 Transport Service Identifier: 4420 00:19:57.833 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:57.833 Transport Address: 10.0.0.2 00:19:57.833 Discovery Log Entry 1 00:19:57.833 ---------------------- 00:19:57.833 Transport Type: 3 (TCP) 00:19:57.833 Address Family: 1 (IPv4) 00:19:57.833 Subsystem Type: 2 (NVM Subsystem) 00:19:57.833 Entry Flags: 00:19:57.833 Duplicate Returned Information: 0 00:19:57.833 Explicit Persistent Connection Support for Discovery: 0 00:19:57.833 Transport Requirements: 00:19:57.833 Secure Channel: Not Required 00:19:57.833 Port ID: 0 (0x0000) 00:19:57.833 Controller ID: 65535 (0xffff) 00:19:57.833 Admin Max SQ Size: 128 00:19:57.833 Transport Service Identifier: 4420 00:19:57.833 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:57.833 Transport Address: 10.0.0.2 [2024-11-04 07:25:59.577113] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:57.833 [2024-11-04 07:25:59.577129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.833 [2024-11-04 07:25:59.577136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.833 [2024-11-04 07:25:59.577141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.833 [2024-11-04 07:25:59.577146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.833 [2024-11-04 07:25:59.577155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.833 [2024-11-04 07:25:59.577170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.833 [2024-11-04 07:25:59.577195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.833 [2024-11-04 07:25:59.577261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.833 [2024-11-04 07:25:59.577267] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.833 [2024-11-04 07:25:59.577271] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.833 [2024-11-04 07:25:59.577297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577304] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.833 [2024-11-04 07:25:59.577311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.833 [2024-11-04 07:25:59.577333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.833 [2024-11-04 07:25:59.577425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.833 [2024-11-04 07:25:59.577431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.833 [2024-11-04 07:25:59.577434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.833 [2024-11-04 07:25:59.577438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.833 [2024-11-04 07:25:59.577443] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:57.834 [2024-11-04 07:25:59.577448] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:57.834 [2024-11-04 07:25:59.577457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.834 [2024-11-04 07:25:59.577471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.834 [2024-11-04 07:25:59.577488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.834 [2024-11-04 07:25:59.577567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.834 [2024-11-04 07:25:59.577573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.834 [2024-11-04 07:25:59.577576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.834 [2024-11-04 07:25:59.577590] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577598] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.834 [2024-11-04 07:25:59.577605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.834 [2024-11-04 07:25:59.577622] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.834 [2024-11-04 07:25:59.577696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.834 [2024-11-04 07:25:59.577703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.834 [2024-11-04 07:25:59.577706] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.834 [2024-11-04 07:25:59.577719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577723] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x2283510) 00:19:57.834 [2024-11-04 07:25:59.577733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.834 [2024-11-04 07:25:59.577749] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.834 [2024-11-04 07:25:59.577812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.834 [2024-11-04 07:25:59.577818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.834 [2024-11-04 07:25:59.577821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.834 [2024-11-04 07:25:59.577834] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.577841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.834 [2024-11-04 07:25:59.577848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.834 [2024-11-04 07:25:59.577879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.834 [2024-11-04 07:25:59.581942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.834 [2024-11-04 07:25:59.581958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.834 [2024-11-04 07:25:59.581962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.581966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.834 [2024-11-04 07:25:59.581980] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.581985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.581988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2283510) 00:19:57.834 [2024-11-04 07:25:59.581996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.834 [2024-11-04 07:25:59.582020] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22cfcc0, cid 3, qid 0 00:19:57.834 [2024-11-04 07:25:59.582088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:57.834 [2024-11-04 07:25:59.582094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:57.834 [2024-11-04 07:25:59.582098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:57.834 [2024-11-04 07:25:59.582101] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22cfcc0) on tqpair=0x2283510 00:19:57.834 [2024-11-04 07:25:59.582109] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:19:57.834 00:19:57.834 07:25:59 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:57.834 [2024-11-04 07:25:59.613899] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
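The host/identify.sh step above invokes spdk_nvme_identify with a transport-ID string passed via -r, targeting nqn.2016-06.io.spdk:cnode1 over TCP; the debug lines that follow show the resulting admin-queue connect, register reads, controller enable and identify sequence. As a hedged aside (not part of the captured output), a minimal sketch of the same connect/identify flow using SPDK's public NVMe host API is shown here; the program name and the fields printed are illustrative, while the address, port and subsystem NQN are the ones appearing in this log.

/*
 * identify_sketch.c - hedged sketch, assuming the SPDK public host API
 * (spdk_nvme_connect and related calls). Not the test's own code.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages/DPDK EAL). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "env init failed\n");
		return 1;
	}

	/* Same style of transport ID the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connect drives the admin-queue bring-up and identify steps logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}

	/* Read back a couple of the identify-controller fields the tool prints. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial: %.20s, max transfer size: %u bytes\n",
	       cdata->sn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}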
00:19:57.834 [2024-11-04 07:25:59.613964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93254 ] 00:19:58.097 [2024-11-04 07:25:59.751523] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:58.097 [2024-11-04 07:25:59.751587] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.097 [2024-11-04 07:25:59.751594] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.097 [2024-11-04 07:25:59.751602] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.097 [2024-11-04 07:25:59.751609] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.097 [2024-11-04 07:25:59.751700] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:58.097 [2024-11-04 07:25:59.751741] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d6f510 0 00:19:58.097 [2024-11-04 07:25:59.758924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.097 [2024-11-04 07:25:59.758945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.097 [2024-11-04 07:25:59.758966] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.097 [2024-11-04 07:25:59.758970] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.097 [2024-11-04 07:25:59.759006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.097 [2024-11-04 07:25:59.759012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.097 [2024-11-04 07:25:59.759016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.097 [2024-11-04 07:25:59.759026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.097 [2024-11-04 07:25:59.759054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.097 [2024-11-04 07:25:59.766921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.766940] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.766962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.766966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.766975] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.098 [2024-11-04 07:25:59.766982] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:58.098 [2024-11-04 07:25:59.766987] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:58.098 [2024-11-04 07:25:59.767000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767009] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.767130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.767139] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:58.098 [2024-11-04 07:25:59.767146] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:58.098 [2024-11-04 07:25:59.767153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.767304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.767314] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:58.098 [2024-11-04 07:25:59.767321] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 
07:25:59.767429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.767439] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.767553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.767562] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:58.098 [2024-11-04 07:25:59.767567] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767574] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767679] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:58.098 [2024-11-04 07:25:59.767683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.767805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 
[2024-11-04 07:25:59.767815] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.098 [2024-11-04 07:25:59.767824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.767838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.767855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.767961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.767969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.767972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.767976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.767981] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.098 [2024-11-04 07:25:59.767986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:58.098 [2024-11-04 07:25:59.767994] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:58.098 [2024-11-04 07:25:59.768007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.098 [2024-11-04 07:25:59.768016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.098 [2024-11-04 07:25:59.768030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.098 [2024-11-04 07:25:59.768050] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.098 [2024-11-04 07:25:59.768155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.098 [2024-11-04 07:25:59.768161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.098 [2024-11-04 07:25:59.768165] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768168] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=4096, cccid=0 00:19:58.098 [2024-11-04 07:25:59.768173] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbb8a0) on tqpair(0x1d6f510): expected_datao=0, payload_size=4096 00:19:58.098 [2024-11-04 07:25:59.768196] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768200] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.098 [2024-11-04 07:25:59.768213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.098 [2024-11-04 07:25:59.768216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.098 [2024-11-04 07:25:59.768220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.098 [2024-11-04 07:25:59.768237] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:58.098 [2024-11-04 07:25:59.768242] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:58.098 [2024-11-04 07:25:59.768246] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:58.098 [2024-11-04 07:25:59.768250] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:58.098 [2024-11-04 07:25:59.768254] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:58.098 [2024-11-04 07:25:59.768259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768271] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.099 [2024-11-04 07:25:59.768312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.099 [2024-11-04 07:25:59.768386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.099 [2024-11-04 07:25:59.768393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.099 [2024-11-04 07:25:59.768396] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768400] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbb8a0) on tqpair=0x1d6f510 00:19:58.099 [2024-11-04 07:25:59.768408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.099 [2024-11-04 07:25:59.768427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.099 [2024-11-04 07:25:59.768445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768449] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.099 [2024-11-04 07:25:59.768463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.099 [2024-11-04 07:25:59.768481] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768492] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.099 [2024-11-04 07:25:59.768533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbb8a0, cid 0, qid 0 00:19:58.099 [2024-11-04 07:25:59.768540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbba00, cid 1, qid 0 00:19:58.099 [2024-11-04 07:25:59.768544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbb60, cid 2, qid 0 00:19:58.099 [2024-11-04 07:25:59.768548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.099 [2024-11-04 07:25:59.768553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.099 [2024-11-04 07:25:59.768676] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.099 [2024-11-04 07:25:59.768684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.099 [2024-11-04 07:25:59.768687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.099 [2024-11-04 07:25:59.768697] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:58.099 [2024-11-04 07:25:59.768702] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768735] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.768742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.099 [2024-11-04 07:25:59.768760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.099 [2024-11-04 07:25:59.768842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.099 [2024-11-04 07:25:59.768858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.099 [2024-11-04 07:25:59.768862] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.768866] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.099 [2024-11-04 07:25:59.768963] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.768992] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.769001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.769016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.099 [2024-11-04 07:25:59.769037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.099 [2024-11-04 07:25:59.769128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.099 [2024-11-04 07:25:59.769135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.099 [2024-11-04 07:25:59.769139] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769142] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=4096, cccid=4 00:19:58.099 [2024-11-04 07:25:59.769148] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbbe20) on tqpair(0x1d6f510): expected_datao=0, payload_size=4096 00:19:58.099 [2024-11-04 07:25:59.769155] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769159] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:19:58.099 [2024-11-04 07:25:59.769167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.099 [2024-11-04 07:25:59.769173] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.099 [2024-11-04 07:25:59.769176] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769180] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.099 [2024-11-04 07:25:59.769197] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:58.099 [2024-11-04 07:25:59.769206] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.769216] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.769224] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.769238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.099 [2024-11-04 07:25:59.769273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.099 [2024-11-04 07:25:59.769359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.099 [2024-11-04 07:25:59.769366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.099 [2024-11-04 07:25:59.769369] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769372] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=4096, cccid=4 00:19:58.099 [2024-11-04 07:25:59.769377] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbbe20) on tqpair(0x1d6f510): expected_datao=0, payload_size=4096 00:19:58.099 [2024-11-04 07:25:59.769384] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769388] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.099 [2024-11-04 07:25:59.769401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.099 [2024-11-04 07:25:59.769404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769408] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.099 [2024-11-04 07:25:59.769423] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.769434] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:58.099 [2024-11-04 07:25:59.769441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769445] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.099 [2024-11-04 07:25:59.769448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.099 [2024-11-04 07:25:59.769455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.099 [2024-11-04 07:25:59.769475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.099 [2024-11-04 07:25:59.769561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.100 [2024-11-04 07:25:59.769567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.100 [2024-11-04 07:25:59.769570] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769574] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=4096, cccid=4 00:19:58.100 [2024-11-04 07:25:59.769578] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbbe20) on tqpair(0x1d6f510): expected_datao=0, payload_size=4096 00:19:58.100 [2024-11-04 07:25:59.769585] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769589] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.769602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.769606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.769618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769627] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769636] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769643] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769648] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769653] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:58.100 [2024-11-04 07:25:59.769657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:58.100 [2024-11-04 07:25:59.769663] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:58.100 [2024-11-04 07:25:59.769676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769684] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.769691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.769698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769705] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.769725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.100 [2024-11-04 07:25:59.769748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.100 [2024-11-04 07:25:59.769756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbf80, cid 5, qid 0 00:19:58.100 [2024-11-04 07:25:59.769853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.769860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.769863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769867] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.769874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.769880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.769884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbf80) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.769897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.769905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.769911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.769929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbf80, cid 5, qid 0 00:19:58.100 [2024-11-04 07:25:59.770010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.770017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.770020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770024] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbf80) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.770034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770048] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770066] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbf80, cid 5, qid 0 00:19:58.100 [2024-11-04 07:25:59.770136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.770143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.770146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbf80) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.770160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbf80, cid 5, qid 0 00:19:58.100 [2024-11-04 07:25:59.770259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.100 [2024-11-04 07:25:59.770266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.100 [2024-11-04 07:25:59.770269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbf80) on tqpair=0x1d6f510 00:19:58.100 [2024-11-04 07:25:59.770285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770293] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770345] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770349] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d6f510) 00:19:58.100 [2024-11-04 07:25:59.770359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.100 [2024-11-04 07:25:59.770377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbf80, cid 5, qid 0 00:19:58.100 [2024-11-04 07:25:59.770384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbe20, cid 4, qid 0 00:19:58.100 [2024-11-04 07:25:59.770388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc0e0, cid 6, qid 0 00:19:58.100 [2024-11-04 07:25:59.770393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc240, cid 7, qid 0 00:19:58.100 [2024-11-04 07:25:59.770589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.100 [2024-11-04 07:25:59.770597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.100 [2024-11-04 07:25:59.770601] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770605] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=8192, cccid=5 00:19:58.100 [2024-11-04 07:25:59.770609] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbbf80) on tqpair(0x1d6f510): expected_datao=0, payload_size=8192 00:19:58.100 [2024-11-04 07:25:59.770625] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770629] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.100 [2024-11-04 07:25:59.770641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.100 [2024-11-04 07:25:59.770644] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770647] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=512, cccid=4 00:19:58.100 [2024-11-04 07:25:59.770653] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbbe20) on tqpair(0x1d6f510): expected_datao=0, payload_size=512 00:19:58.100 [2024-11-04 07:25:59.770659] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770663] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.100 [2024-11-04 07:25:59.770668] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.100 ===================================================== 00:19:58.100 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.100 ===================================================== 00:19:58.100 Controller Capabilities/Features 00:19:58.100 ================================ 00:19:58.100 Vendor ID: 8086 00:19:58.100 Subsystem Vendor ID: 8086 00:19:58.100 Serial Number: SPDK00000000000001 00:19:58.100 Model Number: SPDK bdev Controller 00:19:58.101 Firmware Version: 24.01.1 00:19:58.101 Recommended Arb Burst: 6 00:19:58.101 IEEE OUI Identifier: e4 d2 5c 00:19:58.101 
Multi-path I/O 00:19:58.101 May have multiple subsystem ports: Yes 00:19:58.101 May have multiple controllers: Yes 00:19:58.101 Associated with SR-IOV VF: No 00:19:58.101 Max Data Transfer Size: 131072 00:19:58.101 Max Number of Namespaces: 32 00:19:58.101 Max Number of I/O Queues: 127 00:19:58.101 NVMe Specification Version (VS): 1.3 00:19:58.101 NVMe Specification Version (Identify): 1.3 00:19:58.101 Maximum Queue Entries: 128 00:19:58.101 Contiguous Queues Required: Yes 00:19:58.101 Arbitration Mechanisms Supported 00:19:58.101 Weighted Round Robin: Not Supported 00:19:58.101 Vendor Specific: Not Supported 00:19:58.101 Reset Timeout: 15000 ms 00:19:58.101 Doorbell Stride: 4 bytes 00:19:58.101 NVM Subsystem Reset: Not Supported 00:19:58.101 Command Sets Supported 00:19:58.101 NVM Command Set: Supported 00:19:58.101 Boot Partition: Not Supported 00:19:58.101 Memory Page Size Minimum: 4096 bytes 00:19:58.101 Memory Page Size Maximum: 4096 bytes 00:19:58.101 Persistent Memory Region: Not Supported 00:19:58.101 Optional Asynchronous Events Supported 00:19:58.101 Namespace Attribute Notices: Supported 00:19:58.101 Firmware Activation Notices: Not Supported 00:19:58.101 ANA Change Notices: Not Supported 00:19:58.101 PLE Aggregate Log Change Notices: Not Supported 00:19:58.101 LBA Status Info Alert Notices: Not Supported 00:19:58.101 EGE Aggregate Log Change Notices: Not Supported 00:19:58.101 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.101 Zone Descriptor Change Notices: Not Supported 00:19:58.101 Discovery Log Change Notices: Not Supported 00:19:58.101 Controller Attributes 00:19:58.101 128-bit Host Identifier: Supported 00:19:58.101 Non-Operational Permissive Mode: Not Supported 00:19:58.101 NVM Sets: Not Supported 00:19:58.101 Read Recovery Levels: Not Supported 00:19:58.101 Endurance Groups: Not Supported 00:19:58.101 Predictable Latency Mode: Not Supported 00:19:58.101 Traffic Based Keep ALive: Not Supported 00:19:58.101 Namespace Granularity: Not Supported 00:19:58.101 SQ Associations: Not Supported 00:19:58.101 UUID List: Not Supported 00:19:58.101 Multi-Domain Subsystem: Not Supported 00:19:58.101 Fixed Capacity Management: Not Supported 00:19:58.101 Variable Capacity Management: Not Supported 00:19:58.101 Delete Endurance Group: Not Supported 00:19:58.101 Delete NVM Set: Not Supported 00:19:58.101 Extended LBA Formats Supported: Not Supported 00:19:58.101 Flexible Data Placement Supported: Not Supported 00:19:58.101 00:19:58.101 Controller Memory Buffer Support 00:19:58.101 ================================ 00:19:58.101 Supported: No 00:19:58.101 00:19:58.101 Persistent Memory Region Support 00:19:58.101 ================================ 00:19:58.101 Supported: No 00:19:58.101 00:19:58.101 Admin Command Set Attributes 00:19:58.101 ============================ 00:19:58.101 Security Send/Receive: Not Supported 00:19:58.101 Format NVM: Not Supported 00:19:58.101 Firmware Activate/Download: Not Supported 00:19:58.101 Namespace Management: Not Supported 00:19:58.101 Device Self-Test: Not Supported 00:19:58.101 Directives: Not Supported 00:19:58.101 NVMe-MI: Not Supported 00:19:58.101 Virtualization Management: Not Supported 00:19:58.101 Doorbell Buffer Config: Not Supported 00:19:58.101 Get LBA Status Capability: Not Supported 00:19:58.101 Command & Feature Lockdown Capability: Not Supported 00:19:58.101 Abort Command Limit: 4 00:19:58.101 Async Event Request Limit: 4 00:19:58.101 Number of Firmware Slots: N/A 00:19:58.101 Firmware Slot 1 Read-Only: N/A 00:19:58.101 Firmware 
Activation Without Reset: [2024-11-04 07:25:59.770674] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.101 [2024-11-04 07:25:59.770678] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770681] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=512, cccid=6 00:19:58.101 [2024-11-04 07:25:59.770685] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbc0e0) on tqpair(0x1d6f510): expected_datao=0, payload_size=512 00:19:58.101 [2024-11-04 07:25:59.770692] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770695] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.101 [2024-11-04 07:25:59.770706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.101 [2024-11-04 07:25:59.770710] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770713] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6f510): datao=0, datal=4096, cccid=7 00:19:58.101 [2024-11-04 07:25:59.770717] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbc240) on tqpair(0x1d6f510): expected_datao=0, payload_size=4096 00:19:58.101 [2024-11-04 07:25:59.770724] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770728] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.101 [2024-11-04 07:25:59.770741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.101 [2024-11-04 07:25:59.770745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbf80) on tqpair=0x1d6f510 00:19:58.101 [2024-11-04 07:25:59.770766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.101 [2024-11-04 07:25:59.770773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.101 [2024-11-04 07:25:59.770776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbe20) on tqpair=0x1d6f510 00:19:58.101 [2024-11-04 07:25:59.770790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.101 [2024-11-04 07:25:59.770795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.101 [2024-11-04 07:25:59.770799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770802] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc0e0) on tqpair=0x1d6f510 00:19:58.101 [2024-11-04 07:25:59.770810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.101 [2024-11-04 07:25:59.770816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.101 [2024-11-04 07:25:59.770819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.101 [2024-11-04 07:25:59.770823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc240) on tqpair=0x1d6f510 00:19:58.101 N/A 00:19:58.101 Multiple Update Detection 
Support: N/A 00:19:58.101 Firmware Update Granularity: No Information Provided 00:19:58.101 Per-Namespace SMART Log: No 00:19:58.101 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.101 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:58.101 Command Effects Log Page: Supported 00:19:58.101 Get Log Page Extended Data: Supported 00:19:58.101 Telemetry Log Pages: Not Supported 00:19:58.101 Persistent Event Log Pages: Not Supported 00:19:58.101 Supported Log Pages Log Page: May Support 00:19:58.101 Commands Supported & Effects Log Page: Not Supported 00:19:58.101 Feature Identifiers & Effects Log Page:May Support 00:19:58.101 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.101 Data Area 4 for Telemetry Log: Not Supported 00:19:58.101 Error Log Page Entries Supported: 128 00:19:58.101 Keep Alive: Supported 00:19:58.101 Keep Alive Granularity: 10000 ms 00:19:58.101 00:19:58.101 NVM Command Set Attributes 00:19:58.101 ========================== 00:19:58.101 Submission Queue Entry Size 00:19:58.101 Max: 64 00:19:58.101 Min: 64 00:19:58.101 Completion Queue Entry Size 00:19:58.101 Max: 16 00:19:58.101 Min: 16 00:19:58.101 Number of Namespaces: 32 00:19:58.101 Compare Command: Supported 00:19:58.101 Write Uncorrectable Command: Not Supported 00:19:58.101 Dataset Management Command: Supported 00:19:58.101 Write Zeroes Command: Supported 00:19:58.101 Set Features Save Field: Not Supported 00:19:58.101 Reservations: Supported 00:19:58.101 Timestamp: Not Supported 00:19:58.101 Copy: Supported 00:19:58.101 Volatile Write Cache: Present 00:19:58.101 Atomic Write Unit (Normal): 1 00:19:58.101 Atomic Write Unit (PFail): 1 00:19:58.101 Atomic Compare & Write Unit: 1 00:19:58.101 Fused Compare & Write: Supported 00:19:58.101 Scatter-Gather List 00:19:58.101 SGL Command Set: Supported 00:19:58.101 SGL Keyed: Supported 00:19:58.101 SGL Bit Bucket Descriptor: Not Supported 00:19:58.101 SGL Metadata Pointer: Not Supported 00:19:58.101 Oversized SGL: Not Supported 00:19:58.101 SGL Metadata Address: Not Supported 00:19:58.101 SGL Offset: Supported 00:19:58.101 Transport SGL Data Block: Not Supported 00:19:58.101 Replay Protected Memory Block: Not Supported 00:19:58.101 00:19:58.101 Firmware Slot Information 00:19:58.101 ========================= 00:19:58.101 Active slot: 1 00:19:58.101 Slot 1 Firmware Revision: 24.01.1 00:19:58.101 00:19:58.101 00:19:58.101 Commands Supported and Effects 00:19:58.101 ============================== 00:19:58.101 Admin Commands 00:19:58.101 -------------- 00:19:58.101 Get Log Page (02h): Supported 00:19:58.101 Identify (06h): Supported 00:19:58.101 Abort (08h): Supported 00:19:58.101 Set Features (09h): Supported 00:19:58.102 Get Features (0Ah): Supported 00:19:58.102 Asynchronous Event Request (0Ch): Supported 00:19:58.102 Keep Alive (18h): Supported 00:19:58.102 I/O Commands 00:19:58.102 ------------ 00:19:58.102 Flush (00h): Supported LBA-Change 00:19:58.102 Write (01h): Supported LBA-Change 00:19:58.102 Read (02h): Supported 00:19:58.102 Compare (05h): Supported 00:19:58.102 Write Zeroes (08h): Supported LBA-Change 00:19:58.102 Dataset Management (09h): Supported LBA-Change 00:19:58.102 Copy (19h): Supported LBA-Change 00:19:58.102 Unknown (79h): Supported LBA-Change 00:19:58.102 Unknown (7Ah): Supported 00:19:58.102 00:19:58.102 Error Log 00:19:58.102 ========= 00:19:58.102 00:19:58.102 Arbitration 00:19:58.102 =========== 00:19:58.102 Arbitration Burst: 1 00:19:58.102 00:19:58.102 Power Management 00:19:58.102 ================ 00:19:58.102 Number of Power 
States: 1 00:19:58.102 Current Power State: Power State #0 00:19:58.102 Power State #0: 00:19:58.102 Max Power: 0.00 W 00:19:58.102 Non-Operational State: Operational 00:19:58.102 Entry Latency: Not Reported 00:19:58.102 Exit Latency: Not Reported 00:19:58.102 Relative Read Throughput: 0 00:19:58.102 Relative Read Latency: 0 00:19:58.102 Relative Write Throughput: 0 00:19:58.102 Relative Write Latency: 0 00:19:58.102 Idle Power: Not Reported 00:19:58.102 Active Power: Not Reported 00:19:58.102 Non-Operational Permissive Mode: Not Supported 00:19:58.102 00:19:58.102 Health Information 00:19:58.102 ================== 00:19:58.102 Critical Warnings: 00:19:58.102 Available Spare Space: OK 00:19:58.102 Temperature: OK 00:19:58.102 Device Reliability: OK 00:19:58.102 Read Only: No 00:19:58.102 Volatile Memory Backup: OK 00:19:58.102 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:58.102 Temperature Threshold: [2024-11-04 07:25:59.775004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc240, cid 7, qid 0 00:19:58.102 [2024-11-04 07:25:59.775127] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775138] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc240) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775194] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:58.102 [2024-11-04 07:25:59.775211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.102 [2024-11-04 07:25:59.775218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.102 [2024-11-04 07:25:59.775223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.102 [2024-11-04 07:25:59.775229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.102 [2024-11-04 07:25:59.775254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.102 
[2024-11-04 07:25:59.775369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775383] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.102 [2024-11-04 07:25:59.775512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775537] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:58.102 [2024-11-04 07:25:59.775542] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:58.102 [2024-11-04 07:25:59.775551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775556] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.102 [2024-11-04 07:25:59.775655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 
07:25:59.775709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.102 [2024-11-04 07:25:59.775791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775805] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.102 [2024-11-04 07:25:59.775904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.102 [2024-11-04 07:25:59.775912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.102 [2024-11-04 07:25:59.775915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775919] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.102 [2024-11-04 07:25:59.775929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.102 [2024-11-04 07:25:59.775937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.102 [2024-11-04 07:25:59.775943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.102 [2024-11-04 07:25:59.775962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776113] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776245] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776316] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776319] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776323] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776560] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776685] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776696] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776703] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776806] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776810] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.776850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.776931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.776942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.776962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on 
tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.776977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.776985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.776992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.777011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.777074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.777081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.777084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.777098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.777119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.777135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.777199] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.777206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.777209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.777223] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.777238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.777254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.777343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.777349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.777352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777356] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.777366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777370] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.777380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.777396] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.777464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.103 [2024-11-04 07:25:59.777475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.103 [2024-11-04 07:25:59.777479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.103 [2024-11-04 07:25:59.777493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.103 [2024-11-04 07:25:59.777501] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.103 [2024-11-04 07:25:59.777508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.103 [2024-11-04 07:25:59.777525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.103 [2024-11-04 07:25:59.777580] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.777586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.777590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777593] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.777604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.777618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.777634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.777699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.777706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.777709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.777723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 
00:19:58.104 [2024-11-04 07:25:59.777737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.777753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.777821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.777827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.777831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.777844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.777858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.777884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.777952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.777959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.777962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.777976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.777984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.777991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 
[2024-11-04 07:25:59.778126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778214] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778465] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778576] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778610] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778711] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.778770] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.778836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.778843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 [2024-11-04 07:25:59.778846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.778890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.778897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.778904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.782915] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.782940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.104 [2024-11-04 07:25:59.782948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.104 
[2024-11-04 07:25:59.782951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.782955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.104 [2024-11-04 07:25:59.782969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.782973] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.104 [2024-11-04 07:25:59.782977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6f510) 00:19:58.104 [2024-11-04 07:25:59.782984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.104 [2024-11-04 07:25:59.783006] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbbcc0, cid 3, qid 0 00:19:58.104 [2024-11-04 07:25:59.783085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.105 [2024-11-04 07:25:59.783091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.105 [2024-11-04 07:25:59.783095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.105 [2024-11-04 07:25:59.783098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbbcc0) on tqpair=0x1d6f510 00:19:58.105 [2024-11-04 07:25:59.783106] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:19:58.105 0 Kelvin (-273 Celsius) 00:19:58.105 Available Spare: 0% 00:19:58.105 Available Spare Threshold: 0% 00:19:58.105 Life Percentage Used: 0% 00:19:58.105 Data Units Read: 0 00:19:58.105 Data Units Written: 0 00:19:58.105 Host Read Commands: 0 00:19:58.105 Host Write Commands: 0 00:19:58.105 Controller Busy Time: 0 minutes 00:19:58.105 Power Cycles: 0 00:19:58.105 Power On Hours: 0 hours 00:19:58.105 Unsafe Shutdowns: 0 00:19:58.105 Unrecoverable Media Errors: 0 00:19:58.105 Lifetime Error Log Entries: 0 00:19:58.105 Warning Temperature Time: 0 minutes 00:19:58.105 Critical Temperature Time: 0 minutes 00:19:58.105 00:19:58.105 Number of Queues 00:19:58.105 ================ 00:19:58.105 Number of I/O Submission Queues: 127 00:19:58.105 Number of I/O Completion Queues: 127 00:19:58.105 00:19:58.105 Active Namespaces 00:19:58.105 ================= 00:19:58.105 Namespace ID:1 00:19:58.105 Error Recovery Timeout: Unlimited 00:19:58.105 Command Set Identifier: NVM (00h) 00:19:58.105 Deallocate: Supported 00:19:58.105 Deallocated/Unwritten Error: Not Supported 00:19:58.105 Deallocated Read Value: Unknown 00:19:58.105 Deallocate in Write Zeroes: Not Supported 00:19:58.105 Deallocated Guard Field: 0xFFFF 00:19:58.105 Flush: Supported 00:19:58.105 Reservation: Supported 00:19:58.105 Namespace Sharing Capabilities: Multiple Controllers 00:19:58.105 Size (in LBAs): 131072 (0GiB) 00:19:58.105 Capacity (in LBAs): 131072 (0GiB) 00:19:58.105 Utilization (in LBAs): 131072 (0GiB) 00:19:58.105 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:58.105 EUI64: ABCDEF0123456789 00:19:58.105 UUID: 5b65e086-fd4d-4ab4-9649-420718a94f25 00:19:58.105 Thin Provisioning: Not Supported 00:19:58.105 Per-NS Atomic Units: Yes 00:19:58.105 Atomic Boundary Size (Normal): 0 00:19:58.105 Atomic Boundary Size (PFail): 0 00:19:58.105 Atomic Boundary Offset: 0 00:19:58.105 Maximum Single Source Range Length: 65535 00:19:58.105 Maximum Copy Length: 65535 00:19:58.105 Maximum Source Range Count: 1 00:19:58.105 NGUID/EUI64 Never 
Reused: No 00:19:58.105 Namespace Write Protected: No 00:19:58.105 Number of LBA Formats: 1 00:19:58.105 Current LBA Format: LBA Format #00 00:19:58.105 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:58.105 00:19:58.105 07:25:59 -- host/identify.sh@51 -- # sync 00:19:58.105 07:25:59 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.105 07:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.105 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.105 07:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.105 07:25:59 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:58.105 07:25:59 -- host/identify.sh@56 -- # nvmftestfini 00:19:58.105 07:25:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.105 07:25:59 -- nvmf/common.sh@116 -- # sync 00:19:58.105 07:25:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.105 07:25:59 -- nvmf/common.sh@119 -- # set +e 00:19:58.105 07:25:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.105 07:25:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.105 rmmod nvme_tcp 00:19:58.105 rmmod nvme_fabrics 00:19:58.105 rmmod nvme_keyring 00:19:58.105 07:25:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.364 07:25:59 -- nvmf/common.sh@123 -- # set -e 00:19:58.364 07:25:59 -- nvmf/common.sh@124 -- # return 0 00:19:58.364 07:25:59 -- nvmf/common.sh@477 -- # '[' -n 93189 ']' 00:19:58.364 07:25:59 -- nvmf/common.sh@478 -- # killprocess 93189 00:19:58.364 07:25:59 -- common/autotest_common.sh@926 -- # '[' -z 93189 ']' 00:19:58.364 07:25:59 -- common/autotest_common.sh@930 -- # kill -0 93189 00:19:58.364 07:25:59 -- common/autotest_common.sh@931 -- # uname 00:19:58.364 07:25:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:58.364 07:25:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93189 00:19:58.364 07:25:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:58.364 07:25:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:58.364 killing process with pid 93189 00:19:58.364 07:25:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93189' 00:19:58.364 07:25:59 -- common/autotest_common.sh@945 -- # kill 93189 00:19:58.364 [2024-11-04 07:25:59.978817] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:58.364 07:25:59 -- common/autotest_common.sh@950 -- # wait 93189 00:19:58.364 07:26:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.364 07:26:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.364 07:26:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.364 07:26:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.364 07:26:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.364 07:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.364 07:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.364 07:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.623 07:26:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:58.623 ************************************ 00:19:58.623 END TEST nvmf_identify 00:19:58.623 ************************************ 00:19:58.623 00:19:58.623 real 0m2.650s 00:19:58.623 user 0m7.727s 00:19:58.623 sys 0m0.691s 00:19:58.623 07:26:00 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.623 07:26:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.623 07:26:00 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:58.623 07:26:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:58.623 07:26:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.623 07:26:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.623 ************************************ 00:19:58.623 START TEST nvmf_perf 00:19:58.623 ************************************ 00:19:58.623 07:26:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:58.623 * Looking for test storage... 00:19:58.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.623 07:26:00 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.623 07:26:00 -- nvmf/common.sh@7 -- # uname -s 00:19:58.623 07:26:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.623 07:26:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.623 07:26:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.623 07:26:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.623 07:26:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.623 07:26:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.623 07:26:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.623 07:26:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.623 07:26:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.623 07:26:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.623 07:26:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:58.623 07:26:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:19:58.623 07:26:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.623 07:26:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.623 07:26:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.623 07:26:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.623 07:26:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.623 07:26:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.623 07:26:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.623 07:26:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.623 07:26:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.623 07:26:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.623 07:26:00 -- paths/export.sh@5 -- # export PATH 00:19:58.623 07:26:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.623 07:26:00 -- nvmf/common.sh@46 -- # : 0 00:19:58.623 07:26:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.623 07:26:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.623 07:26:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.623 07:26:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.623 07:26:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.623 07:26:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.623 07:26:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.623 07:26:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.623 07:26:00 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:58.623 07:26:00 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:58.623 07:26:00 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.623 07:26:00 -- host/perf.sh@17 -- # nvmftestinit 00:19:58.623 07:26:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:58.623 07:26:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.623 07:26:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.623 07:26:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.623 07:26:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.623 07:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.623 07:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.623 07:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.623 07:26:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:58.623 07:26:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:58.623 07:26:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:58.623 07:26:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
00:19:58.623 07:26:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:58.624 07:26:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:58.624 07:26:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.624 07:26:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.624 07:26:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:58.624 07:26:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:58.624 07:26:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.624 07:26:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.624 07:26:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.624 07:26:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.624 07:26:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.624 07:26:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.624 07:26:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.624 07:26:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.624 07:26:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:58.624 07:26:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:58.624 Cannot find device "nvmf_tgt_br" 00:19:58.624 07:26:00 -- nvmf/common.sh@154 -- # true 00:19:58.624 07:26:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.624 Cannot find device "nvmf_tgt_br2" 00:19:58.624 07:26:00 -- nvmf/common.sh@155 -- # true 00:19:58.624 07:26:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:58.624 07:26:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:58.624 Cannot find device "nvmf_tgt_br" 00:19:58.624 07:26:00 -- nvmf/common.sh@157 -- # true 00:19:58.624 07:26:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:58.882 Cannot find device "nvmf_tgt_br2" 00:19:58.882 07:26:00 -- nvmf/common.sh@158 -- # true 00:19:58.882 07:26:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:58.882 07:26:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:58.882 07:26:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.882 07:26:00 -- nvmf/common.sh@161 -- # true 00:19:58.882 07:26:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.882 07:26:00 -- nvmf/common.sh@162 -- # true 00:19:58.882 07:26:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.882 07:26:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.882 07:26:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.882 07:26:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.882 07:26:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.882 07:26:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.882 07:26:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.882 07:26:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:58.882 07:26:00 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:58.882 07:26:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:58.882 07:26:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:58.882 07:26:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:58.882 07:26:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:58.882 07:26:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.882 07:26:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.882 07:26:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.882 07:26:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:58.882 07:26:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:58.882 07:26:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.882 07:26:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.882 07:26:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.882 07:26:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.882 07:26:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.142 07:26:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:59.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:19:59.142 00:19:59.142 --- 10.0.0.2 ping statistics --- 00:19:59.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.142 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:19:59.142 07:26:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:59.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:59.142 00:19:59.142 --- 10.0.0.3 ping statistics --- 00:19:59.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.142 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:59.142 07:26:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:19:59.142 00:19:59.142 --- 10.0.0.1 ping statistics --- 00:19:59.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.142 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:59.142 07:26:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.142 07:26:00 -- nvmf/common.sh@421 -- # return 0 00:19:59.142 07:26:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:59.142 07:26:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.142 07:26:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:59.142 07:26:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:59.142 07:26:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.142 07:26:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:59.142 07:26:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:59.142 07:26:00 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:59.142 07:26:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:59.142 07:26:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:59.142 07:26:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.142 07:26:00 -- nvmf/common.sh@469 -- # nvmfpid=93416 00:19:59.142 07:26:00 -- nvmf/common.sh@470 -- # waitforlisten 93416 00:19:59.142 07:26:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:59.142 07:26:00 -- common/autotest_common.sh@819 -- # '[' -z 93416 ']' 00:19:59.142 07:26:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.142 07:26:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.142 07:26:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.142 07:26:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.142 07:26:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.142 [2024-11-04 07:26:00.822315] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:59.142 [2024-11-04 07:26:00.822416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.142 [2024-11-04 07:26:00.958598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.401 [2024-11-04 07:26:01.024479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.401 [2024-11-04 07:26:01.024604] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.401 [2024-11-04 07:26:01.024616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.401 [2024-11-04 07:26:01.024623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.401 [2024-11-04 07:26:01.024787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.401 [2024-11-04 07:26:01.025290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.401 [2024-11-04 07:26:01.025915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.401 [2024-11-04 07:26:01.025924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.969 07:26:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.969 07:26:01 -- common/autotest_common.sh@852 -- # return 0 00:19:59.969 07:26:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:59.969 07:26:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:59.969 07:26:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.969 07:26:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.969 07:26:01 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:59.969 07:26:01 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:00.537 07:26:02 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:00.537 07:26:02 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:00.796 07:26:02 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:00.796 07:26:02 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.055 07:26:02 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:01.055 07:26:02 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:01.055 07:26:02 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:01.055 07:26:02 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:01.055 07:26:02 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.055 [2024-11-04 07:26:02.885952] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.313 07:26:02 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.572 07:26:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:01.572 07:26:03 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.572 07:26:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:01.572 07:26:03 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:02.140 07:26:03 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.140 [2024-11-04 07:26:03.956030] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.140 07:26:03 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:02.399 07:26:04 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:02.399 07:26:04 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:02.399 07:26:04 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:02.399 07:26:04 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:03.774 Initializing NVMe 
Controllers 00:20:03.774 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:03.774 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:03.774 Initialization complete. Launching workers. 00:20:03.774 ======================================================== 00:20:03.774 Latency(us) 00:20:03.774 Device Information : IOPS MiB/s Average min max 00:20:03.774 PCIE (0000:00:06.0) NSID 1 from core 0: 20735.00 81.00 1553.68 346.69 8606.27 00:20:03.774 ======================================================== 00:20:03.774 Total : 20735.00 81.00 1553.68 346.69 8606.27 00:20:03.774 00:20:03.774 07:26:05 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:05.151 Initializing NVMe Controllers 00:20:05.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:05.151 Initialization complete. Launching workers. 00:20:05.151 ======================================================== 00:20:05.151 Latency(us) 00:20:05.151 Device Information : IOPS MiB/s Average min max 00:20:05.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3317.99 12.96 301.11 114.11 6235.68 00:20:05.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.94 6021.57 14098.07 00:20:05.151 ======================================================== 00:20:05.151 Total : 3440.99 13.44 583.27 114.11 14098.07 00:20:05.151 00:20:05.151 07:26:06 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.529 Initializing NVMe Controllers 00:20:06.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.529 Initialization complete. Launching workers. 00:20:06.529 ======================================================== 00:20:06.529 Latency(us) 00:20:06.529 Device Information : IOPS MiB/s Average min max 00:20:06.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10118.88 39.53 3164.26 537.71 7082.89 00:20:06.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2676.97 10.46 12035.83 6674.01 22824.09 00:20:06.529 ======================================================== 00:20:06.529 Total : 12795.85 49.98 5020.25 537.71 22824.09 00:20:06.529 00:20:06.529 07:26:08 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:06.529 07:26:08 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.063 Initializing NVMe Controllers 00:20:09.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.063 Controller IO queue size 128, less than required. 00:20:09.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.063 Controller IO queue size 128, less than required. 
00:20:09.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:09.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:09.063 Initialization complete. Launching workers. 00:20:09.063 ======================================================== 00:20:09.063 Latency(us) 00:20:09.063 Device Information : IOPS MiB/s Average min max 00:20:09.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1460.03 365.01 89118.60 62358.39 151405.82 00:20:09.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 552.82 138.21 240795.43 95009.07 331612.49 00:20:09.063 ======================================================== 00:20:09.063 Total : 2012.85 503.21 130776.06 62358.39 331612.49 00:20:09.063 00:20:09.063 07:26:10 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:09.063 No valid NVMe controllers or AIO or URING devices found 00:20:09.063 Initializing NVMe Controllers 00:20:09.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.064 Controller IO queue size 128, less than required. 00:20:09.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.064 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:09.064 Controller IO queue size 128, less than required. 00:20:09.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:09.064 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:09.064 WARNING: Some requested NVMe devices were skipped 00:20:09.064 07:26:10 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:11.595 Initializing NVMe Controllers 00:20:11.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.595 Controller IO queue size 128, less than required. 00:20:11.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.595 Controller IO queue size 128, less than required. 00:20:11.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.595 Initialization complete. Launching workers. 
00:20:11.595 00:20:11.595 ==================== 00:20:11.595 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:11.595 TCP transport: 00:20:11.595 polls: 8279 00:20:11.595 idle_polls: 5672 00:20:11.595 sock_completions: 2607 00:20:11.595 nvme_completions: 4220 00:20:11.595 submitted_requests: 6514 00:20:11.595 queued_requests: 1 00:20:11.595 00:20:11.595 ==================== 00:20:11.595 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:11.595 TCP transport: 00:20:11.595 polls: 11099 00:20:11.595 idle_polls: 8437 00:20:11.595 sock_completions: 2662 00:20:11.595 nvme_completions: 5206 00:20:11.595 submitted_requests: 7898 00:20:11.595 queued_requests: 1 00:20:11.595 ======================================================== 00:20:11.595 Latency(us) 00:20:11.596 Device Information : IOPS MiB/s Average min max 00:20:11.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1116.80 279.20 117517.39 71097.48 189053.57 00:20:11.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1362.92 340.73 94998.78 36147.70 137405.17 00:20:11.596 ======================================================== 00:20:11.596 Total : 2479.72 619.93 105140.54 36147.70 189053.57 00:20:11.596 00:20:11.596 07:26:13 -- host/perf.sh@66 -- # sync 00:20:11.596 07:26:13 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.854 07:26:13 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:11.854 07:26:13 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:11.854 07:26:13 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:12.422 07:26:13 -- host/perf.sh@72 -- # ls_guid=fe9ea908-4192-4558-ae44-38ddae59a462 00:20:12.422 07:26:13 -- host/perf.sh@73 -- # get_lvs_free_mb fe9ea908-4192-4558-ae44-38ddae59a462 00:20:12.422 07:26:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=fe9ea908-4192-4558-ae44-38ddae59a462 00:20:12.422 07:26:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:12.422 07:26:13 -- common/autotest_common.sh@1345 -- # local fc 00:20:12.422 07:26:13 -- common/autotest_common.sh@1346 -- # local cs 00:20:12.422 07:26:13 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:12.422 07:26:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:12.422 { 00:20:12.422 "base_bdev": "Nvme0n1", 00:20:12.422 "block_size": 4096, 00:20:12.422 "cluster_size": 4194304, 00:20:12.422 "free_clusters": 1278, 00:20:12.422 "name": "lvs_0", 00:20:12.422 "total_data_clusters": 1278, 00:20:12.422 "uuid": "fe9ea908-4192-4558-ae44-38ddae59a462" 00:20:12.422 } 00:20:12.422 ]' 00:20:12.422 07:26:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="fe9ea908-4192-4558-ae44-38ddae59a462") .free_clusters' 00:20:12.680 07:26:14 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:12.680 07:26:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="fe9ea908-4192-4558-ae44-38ddae59a462") .cluster_size' 00:20:12.680 5112 00:20:12.680 07:26:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:12.680 07:26:14 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:12.680 07:26:14 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:12.680 07:26:14 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:12.680 07:26:14 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u fe9ea908-4192-4558-ae44-38ddae59a462 lbd_0 5112 00:20:12.939 07:26:14 -- host/perf.sh@80 -- # lb_guid=6279da74-06a2-463c-910c-cf2580c60b27 00:20:12.939 07:26:14 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6279da74-06a2-463c-910c-cf2580c60b27 lvs_n_0 00:20:13.197 07:26:14 -- host/perf.sh@83 -- # ls_nested_guid=a27c2791-6689-4567-8c34-c0c389b55b96 00:20:13.197 07:26:14 -- host/perf.sh@84 -- # get_lvs_free_mb a27c2791-6689-4567-8c34-c0c389b55b96 00:20:13.197 07:26:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a27c2791-6689-4567-8c34-c0c389b55b96 00:20:13.197 07:26:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:13.197 07:26:14 -- common/autotest_common.sh@1345 -- # local fc 00:20:13.197 07:26:14 -- common/autotest_common.sh@1346 -- # local cs 00:20:13.197 07:26:14 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:13.456 07:26:15 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:13.456 { 00:20:13.456 "base_bdev": "Nvme0n1", 00:20:13.456 "block_size": 4096, 00:20:13.456 "cluster_size": 4194304, 00:20:13.456 "free_clusters": 0, 00:20:13.456 "name": "lvs_0", 00:20:13.456 "total_data_clusters": 1278, 00:20:13.456 "uuid": "fe9ea908-4192-4558-ae44-38ddae59a462" 00:20:13.456 }, 00:20:13.456 { 00:20:13.456 "base_bdev": "6279da74-06a2-463c-910c-cf2580c60b27", 00:20:13.456 "block_size": 4096, 00:20:13.456 "cluster_size": 4194304, 00:20:13.456 "free_clusters": 1276, 00:20:13.456 "name": "lvs_n_0", 00:20:13.456 "total_data_clusters": 1276, 00:20:13.456 "uuid": "a27c2791-6689-4567-8c34-c0c389b55b96" 00:20:13.456 } 00:20:13.456 ]' 00:20:13.456 07:26:15 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a27c2791-6689-4567-8c34-c0c389b55b96") .free_clusters' 00:20:13.456 07:26:15 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:13.456 07:26:15 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a27c2791-6689-4567-8c34-c0c389b55b96") .cluster_size' 00:20:13.456 07:26:15 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:13.456 07:26:15 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:13.456 5104 00:20:13.456 07:26:15 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:13.456 07:26:15 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:13.456 07:26:15 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a27c2791-6689-4567-8c34-c0c389b55b96 lbd_nest_0 5104 00:20:13.715 07:26:15 -- host/perf.sh@88 -- # lb_nested_guid=36f6433a-7ded-4f44-9ebc-8a5db24ff7a8 00:20:13.715 07:26:15 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.973 07:26:15 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:13.973 07:26:15 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 36f6433a-7ded-4f44-9ebc-8a5db24ff7a8 00:20:14.232 07:26:15 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.491 07:26:16 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:14.491 07:26:16 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:14.491 07:26:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:14.491 07:26:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:14.491 07:26:16 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:14.749 No valid NVMe controllers or AIO or URING devices found 00:20:14.749 Initializing NVMe Controllers 00:20:14.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.749 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:14.749 WARNING: Some requested NVMe devices were skipped 00:20:14.750 07:26:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:14.750 07:26:16 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.984 Initializing NVMe Controllers 00:20:26.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.984 Initialization complete. Launching workers. 00:20:26.984 ======================================================== 00:20:26.984 Latency(us) 00:20:26.984 Device Information : IOPS MiB/s Average min max 00:20:26.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 808.20 101.02 1236.89 407.06 7796.17 00:20:26.984 ======================================================== 00:20:26.984 Total : 808.20 101.02 1236.89 407.06 7796.17 00:20:26.984 00:20:26.984 07:26:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:26.984 07:26:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:26.984 07:26:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.984 No valid NVMe controllers or AIO or URING devices found 00:20:26.984 Initializing NVMe Controllers 00:20:26.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.984 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:26.984 WARNING: Some requested NVMe devices were skipped 00:20:26.984 07:26:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:26.984 07:26:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.960 Initializing NVMe Controllers 00:20:36.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.960 Initialization complete. Launching workers. 
00:20:36.960 ======================================================== 00:20:36.960 Latency(us) 00:20:36.960 Device Information : IOPS MiB/s Average min max 00:20:36.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1091.56 136.44 29347.49 7306.85 231828.02 00:20:36.960 ======================================================== 00:20:36.960 Total : 1091.56 136.44 29347.49 7306.85 231828.02 00:20:36.960 00:20:36.960 07:26:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:36.960 07:26:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:36.960 07:26:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.960 No valid NVMe controllers or AIO or URING devices found 00:20:36.960 Initializing NVMe Controllers 00:20:36.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.960 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:36.960 WARNING: Some requested NVMe devices were skipped 00:20:36.960 07:26:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:36.960 07:26:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.938 Initializing NVMe Controllers 00:20:46.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.938 Controller IO queue size 128, less than required. 00:20:46.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:46.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.938 Initialization complete. Launching workers. 
00:20:46.938 ======================================================== 00:20:46.938 Latency(us) 00:20:46.938 Device Information : IOPS MiB/s Average min max 00:20:46.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4174.50 521.81 30686.39 10713.52 56731.97 00:20:46.938 ======================================================== 00:20:46.938 Total : 4174.50 521.81 30686.39 10713.52 56731.97 00:20:46.938 00:20:46.938 07:26:47 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.938 07:26:48 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 36f6433a-7ded-4f44-9ebc-8a5db24ff7a8 00:20:46.938 07:26:48 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:46.938 07:26:48 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6279da74-06a2-463c-910c-cf2580c60b27 00:20:47.505 07:26:49 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:47.505 07:26:49 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:47.505 07:26:49 -- host/perf.sh@114 -- # nvmftestfini 00:20:47.505 07:26:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:47.505 07:26:49 -- nvmf/common.sh@116 -- # sync 00:20:47.505 07:26:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:47.506 07:26:49 -- nvmf/common.sh@119 -- # set +e 00:20:47.506 07:26:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:47.506 07:26:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:47.506 rmmod nvme_tcp 00:20:47.506 rmmod nvme_fabrics 00:20:47.506 rmmod nvme_keyring 00:20:47.764 07:26:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:47.764 07:26:49 -- nvmf/common.sh@123 -- # set -e 00:20:47.764 07:26:49 -- nvmf/common.sh@124 -- # return 0 00:20:47.764 07:26:49 -- nvmf/common.sh@477 -- # '[' -n 93416 ']' 00:20:47.764 07:26:49 -- nvmf/common.sh@478 -- # killprocess 93416 00:20:47.764 07:26:49 -- common/autotest_common.sh@926 -- # '[' -z 93416 ']' 00:20:47.764 07:26:49 -- common/autotest_common.sh@930 -- # kill -0 93416 00:20:47.764 07:26:49 -- common/autotest_common.sh@931 -- # uname 00:20:47.764 07:26:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.764 07:26:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93416 00:20:47.764 killing process with pid 93416 00:20:47.764 07:26:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:47.764 07:26:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:47.764 07:26:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93416' 00:20:47.764 07:26:49 -- common/autotest_common.sh@945 -- # kill 93416 00:20:47.764 07:26:49 -- common/autotest_common.sh@950 -- # wait 93416 00:20:49.140 07:26:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:49.140 07:26:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:49.140 07:26:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:49.140 07:26:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.140 07:26:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:49.140 07:26:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.140 07:26:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.140 07:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.140 07:26:50 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:49.140 00:20:49.140 real 0m50.448s 00:20:49.140 user 3m10.804s 00:20:49.140 sys 0m10.357s 00:20:49.140 07:26:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.140 07:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:49.140 ************************************ 00:20:49.140 END TEST nvmf_perf 00:20:49.140 ************************************ 00:20:49.140 07:26:50 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:49.140 07:26:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:49.140 07:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:49.140 07:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:49.140 ************************************ 00:20:49.140 START TEST nvmf_fio_host 00:20:49.140 ************************************ 00:20:49.140 07:26:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:49.140 * Looking for test storage... 00:20:49.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:49.140 07:26:50 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.140 07:26:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.140 07:26:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.140 07:26:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.140 07:26:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 07:26:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 07:26:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 07:26:50 -- paths/export.sh@5 -- # export PATH 00:20:49.140 07:26:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 07:26:50 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.140 07:26:50 -- nvmf/common.sh@7 -- # uname -s 00:20:49.140 07:26:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.140 07:26:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.140 07:26:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.140 07:26:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.140 07:26:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.140 07:26:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.140 07:26:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.140 07:26:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.140 07:26:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.140 07:26:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.140 07:26:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:20:49.140 07:26:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:20:49.140 07:26:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.140 07:26:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.140 07:26:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.140 07:26:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.140 07:26:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.140 07:26:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.140 07:26:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.141 07:26:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.141 07:26:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.141 07:26:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.141 07:26:50 -- paths/export.sh@5 -- # export PATH 00:20:49.141 07:26:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.141 07:26:50 -- nvmf/common.sh@46 -- # : 0 00:20:49.141 07:26:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:49.141 07:26:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:49.141 07:26:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:49.141 07:26:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.141 07:26:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.141 07:26:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:49.141 07:26:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:49.141 07:26:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:49.141 07:26:50 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.141 07:26:50 -- host/fio.sh@14 -- # nvmftestinit 00:20:49.141 07:26:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:49.141 07:26:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.141 07:26:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:49.141 07:26:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:49.141 07:26:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:49.141 07:26:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.141 07:26:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.141 07:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.141 07:26:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:49.141 07:26:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:49.141 07:26:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:49.141 07:26:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:49.141 07:26:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:49.141 07:26:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:49.141 07:26:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.141 07:26:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.141 07:26:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:49.141 07:26:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:49.141 07:26:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:49.141 07:26:50 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:49.141 07:26:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:49.141 07:26:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.141 07:26:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:49.141 07:26:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:49.141 07:26:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:49.141 07:26:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:49.141 07:26:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:49.141 07:26:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:49.141 Cannot find device "nvmf_tgt_br" 00:20:49.141 07:26:50 -- nvmf/common.sh@154 -- # true 00:20:49.141 07:26:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.141 Cannot find device "nvmf_tgt_br2" 00:20:49.141 07:26:50 -- nvmf/common.sh@155 -- # true 00:20:49.141 07:26:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:49.141 07:26:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:49.141 Cannot find device "nvmf_tgt_br" 00:20:49.141 07:26:50 -- nvmf/common.sh@157 -- # true 00:20:49.141 07:26:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:49.141 Cannot find device "nvmf_tgt_br2" 00:20:49.141 07:26:50 -- nvmf/common.sh@158 -- # true 00:20:49.141 07:26:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:49.399 07:26:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:49.399 07:26:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.399 07:26:51 -- nvmf/common.sh@161 -- # true 00:20:49.399 07:26:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.399 07:26:51 -- nvmf/common.sh@162 -- # true 00:20:49.400 07:26:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:49.400 07:26:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:49.400 07:26:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:49.400 07:26:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:49.400 07:26:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:49.400 07:26:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:49.400 07:26:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:49.400 07:26:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:49.400 07:26:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:49.400 07:26:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:49.400 07:26:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:49.400 07:26:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:49.400 07:26:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:49.400 07:26:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:49.400 07:26:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:49.400 07:26:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:49.400 07:26:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:49.400 07:26:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:49.400 07:26:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:49.400 07:26:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:49.400 07:26:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:49.400 07:26:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:49.400 07:26:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:49.400 07:26:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:49.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:49.400 00:20:49.400 --- 10.0.0.2 ping statistics --- 00:20:49.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.400 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:49.400 07:26:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:49.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:49.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:20:49.400 00:20:49.400 --- 10.0.0.3 ping statistics --- 00:20:49.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.400 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:49.400 07:26:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:49.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:49.400 00:20:49.400 --- 10.0.0.1 ping statistics --- 00:20:49.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.400 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:49.400 07:26:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.400 07:26:51 -- nvmf/common.sh@421 -- # return 0 00:20:49.400 07:26:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:49.400 07:26:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.400 07:26:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:49.400 07:26:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:49.400 07:26:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.400 07:26:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:49.400 07:26:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:49.660 07:26:51 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:49.660 07:26:51 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:49.660 07:26:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:49.660 07:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:49.660 07:26:51 -- host/fio.sh@24 -- # nvmfpid=94379 00:20:49.660 07:26:51 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.660 07:26:51 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.660 07:26:51 -- host/fio.sh@28 -- # waitforlisten 94379 00:20:49.660 07:26:51 -- common/autotest_common.sh@819 -- # '[' -z 94379 ']' 00:20:49.660 07:26:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.660 07:26:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.660 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.660 07:26:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.660 07:26:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.660 07:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:49.660 [2024-11-04 07:26:51.304617] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:49.660 [2024-11-04 07:26:51.304727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.660 [2024-11-04 07:26:51.439747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.920 [2024-11-04 07:26:51.511883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.920 [2024-11-04 07:26:51.512036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.920 [2024-11-04 07:26:51.512051] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.920 [2024-11-04 07:26:51.512060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.920 [2024-11-04 07:26:51.512399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.920 [2024-11-04 07:26:51.512541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.920 [2024-11-04 07:26:51.513013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.920 [2024-11-04 07:26:51.513085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.486 07:26:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:50.486 07:26:52 -- common/autotest_common.sh@852 -- # return 0 00:20:50.486 07:26:52 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:50.745 [2024-11-04 07:26:52.506297] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.745 07:26:52 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:50.745 07:26:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:50.745 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:50.745 07:26:52 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:51.312 Malloc1 00:20:51.312 07:26:52 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.312 07:26:53 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.570 07:26:53 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.829 [2024-11-04 07:26:53.501544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.829 07:26:53 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:52.087 07:26:53 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:52.087 07:26:53 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.087 07:26:53 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.087 07:26:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:52.087 07:26:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.087 07:26:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:52.087 07:26:53 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.087 07:26:53 -- common/autotest_common.sh@1320 -- # shift 00:20:52.087 07:26:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:52.087 07:26:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:52.087 07:26:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:52.087 07:26:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:52.087 07:26:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:52.087 07:26:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:52.087 07:26:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:52.087 07:26:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.346 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.346 fio-3.35 00:20:52.346 Starting 1 thread 00:20:54.880 00:20:54.880 test: (groupid=0, jobs=1): err= 0: pid=94505: Mon Nov 4 07:26:56 2024 00:20:54.880 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2006msec) 00:20:54.880 slat (nsec): min=1697, max=308877, avg=2133.72, stdev=2968.69 00:20:54.880 clat (usec): min=3016, max=11129, avg=6532.61, stdev=576.59 00:20:54.880 lat (usec): min=3047, max=11131, avg=6534.74, stdev=576.46 00:20:54.880 clat percentiles (usec): 00:20:54.880 | 1.00th=[ 5407], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:20:54.880 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:20:54.880 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7504], 00:20:54.880 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[10290], 00:20:54.880 | 99.99th=[10552] 00:20:54.880 bw ( KiB/s): min=39968, max=42512, per=99.95%, avg=41470.00, stdev=1114.20, samples=4 00:20:54.880 iops : min= 9992, max=10628, avg=10367.50, stdev=278.55, samples=4 00:20:54.880 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2006msec); 0 zone resets 00:20:54.880 slat 
(nsec): min=1766, max=275622, avg=2179.02, stdev=2388.65 00:20:54.880 clat (usec): min=2318, max=11085, avg=5749.99, stdev=488.65 00:20:54.880 lat (usec): min=2331, max=11087, avg=5752.16, stdev=488.60 00:20:54.880 clat percentiles (usec): 00:20:54.880 | 1.00th=[ 4752], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:20:54.880 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:20:54.880 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6521], 00:20:54.880 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[10290], 00:20:54.880 | 99.99th=[11076] 00:20:54.880 bw ( KiB/s): min=40560, max=41984, per=100.00%, avg=41522.00, stdev=655.04, samples=4 00:20:54.880 iops : min=10140, max=10496, avg=10380.50, stdev=163.76, samples=4 00:20:54.880 lat (msec) : 4=0.10%, 10=99.84%, 20=0.06% 00:20:54.880 cpu : usr=68.53%, sys=22.59%, ctx=13, majf=0, minf=5 00:20:54.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:54.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.880 issued rwts: total=20807,20816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.880 00:20:54.880 Run status group 0 (all jobs): 00:20:54.880 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.2MB), run=2006-2006msec 00:20:54.880 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.3MB), run=2006-2006msec 00:20:54.880 07:26:56 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.880 07:26:56 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.880 07:26:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:54.880 07:26:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.880 07:26:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:54.880 07:26:56 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.880 07:26:56 -- common/autotest_common.sh@1320 -- # shift 00:20:54.880 07:26:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:54.880 07:26:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:54.880 07:26:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:54.880 07:26:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:54.880 07:26:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:54.880 
07:26:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:54.880 07:26:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:54.880 07:26:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.880 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:54.880 fio-3.35 00:20:54.880 Starting 1 thread 00:20:57.414 00:20:57.414 test: (groupid=0, jobs=1): err= 0: pid=94554: Mon Nov 4 07:26:58 2024 00:20:57.414 read: IOPS=8528, BW=133MiB/s (140MB/s)(268MiB/2008msec) 00:20:57.414 slat (usec): min=2, max=115, avg= 3.52, stdev= 2.33 00:20:57.414 clat (usec): min=1858, max=18314, avg=8861.03, stdev=2341.23 00:20:57.414 lat (usec): min=1861, max=18319, avg=8864.55, stdev=2341.45 00:20:57.414 clat percentiles (usec): 00:20:57.414 | 1.00th=[ 4490], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6652], 00:20:57.414 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9503], 00:20:57.414 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11863], 95.00th=[12911], 00:20:57.414 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16450], 99.95th=[16581], 00:20:57.414 | 99.99th=[16909] 00:20:57.414 bw ( KiB/s): min=65312, max=84064, per=52.28%, avg=71344.00, stdev=8725.84, samples=4 00:20:57.414 iops : min= 4082, max= 5254, avg=4459.00, stdev=545.36, samples=4 00:20:57.414 write: IOPS=5152, BW=80.5MiB/s (84.4MB/s)(146MiB/1812msec); 0 zone resets 00:20:57.414 slat (usec): min=29, max=359, avg=35.40, stdev=10.42 00:20:57.414 clat (usec): min=3762, max=17979, avg=10498.17, stdev=2078.04 00:20:57.414 lat (usec): min=3794, max=18026, avg=10533.57, stdev=2080.83 00:20:57.414 clat percentiles (usec): 00:20:57.414 | 1.00th=[ 6456], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8717], 00:20:57.414 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10814], 00:20:57.414 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13435], 95.00th=[14484], 00:20:57.414 | 99.00th=[15795], 99.50th=[16188], 99.90th=[17171], 99.95th=[17695], 00:20:57.414 | 99.99th=[17957] 00:20:57.414 bw ( KiB/s): min=67456, max=87904, per=90.28%, avg=74424.00, stdev=9240.94, samples=4 00:20:57.414 iops : min= 4216, max= 5494, avg=4651.50, stdev=577.56, samples=4 00:20:57.414 lat (msec) : 2=0.01%, 4=0.32%, 10=58.85%, 20=40.81% 00:20:57.414 cpu : usr=67.61%, sys=20.93%, ctx=19, majf=0, minf=1 00:20:57.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:57.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.414 issued rwts: total=17126,9336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.414 00:20:57.414 Run status group 0 (all jobs): 00:20:57.414 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=268MiB (281MB), run=2008-2008msec 00:20:57.414 WRITE: bw=80.5MiB/s (84.4MB/s), 80.5MiB/s-80.5MiB/s (84.4MB/s-84.4MB/s), io=146MiB (153MB), run=1812-1812msec 00:20:57.414 07:26:58 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.414 07:26:59 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:57.414 07:26:59 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:57.414 07:26:59 -- host/fio.sh@51 
-- # get_nvme_bdfs 00:20:57.414 07:26:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:57.414 07:26:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:57.414 07:26:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:57.414 07:26:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:57.414 07:26:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:57.414 07:26:59 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:57.414 07:26:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:57.414 07:26:59 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:57.673 Nvme0n1 00:20:57.673 07:26:59 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:57.934 07:26:59 -- host/fio.sh@53 -- # ls_guid=32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68 00:20:57.934 07:26:59 -- host/fio.sh@54 -- # get_lvs_free_mb 32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68 00:20:57.934 07:26:59 -- common/autotest_common.sh@1343 -- # local lvs_uuid=32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68 00:20:57.934 07:26:59 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:57.934 07:26:59 -- common/autotest_common.sh@1345 -- # local fc 00:20:57.934 07:26:59 -- common/autotest_common.sh@1346 -- # local cs 00:20:57.934 07:26:59 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:58.214 07:26:59 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:58.214 { 00:20:58.214 "base_bdev": "Nvme0n1", 00:20:58.214 "block_size": 4096, 00:20:58.214 "cluster_size": 1073741824, 00:20:58.214 "free_clusters": 4, 00:20:58.214 "name": "lvs_0", 00:20:58.214 "total_data_clusters": 4, 00:20:58.214 "uuid": "32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68" 00:20:58.214 } 00:20:58.214 ]' 00:20:58.214 07:26:59 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68") .free_clusters' 00:20:58.214 07:26:59 -- common/autotest_common.sh@1348 -- # fc=4 00:20:58.214 07:26:59 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68") .cluster_size' 00:20:58.214 4096 00:20:58.214 07:26:59 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:20:58.214 07:26:59 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:20:58.214 07:26:59 -- common/autotest_common.sh@1353 -- # echo 4096 00:20:58.214 07:26:59 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:58.478 41b0e268-429a-445e-bd92-ff4b07a2ff28 00:20:58.478 07:27:00 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:58.736 07:27:00 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:58.995 07:27:00 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:59.254 07:27:00 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.254 07:27:00 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.254 07:27:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:59.254 07:27:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.254 07:27:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:59.254 07:27:00 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.254 07:27:00 -- common/autotest_common.sh@1320 -- # shift 00:20:59.254 07:27:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:59.254 07:27:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:59.254 07:27:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:59.254 07:27:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:59.254 07:27:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:59.254 07:27:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:59.254 07:27:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:59.254 07:27:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:59.254 fio-3.35 00:20:59.254 Starting 1 thread 00:21:01.788 00:21:01.789 test: (groupid=0, jobs=1): err= 0: pid=94705: Mon Nov 4 07:27:03 2024 00:21:01.789 read: IOPS=7674, BW=30.0MiB/s (31.4MB/s)(60.1MiB/2006msec) 00:21:01.789 slat (nsec): min=1717, max=390605, avg=2751.76, stdev=4706.87 00:21:01.789 clat (usec): min=3743, max=13890, avg=8966.13, stdev=889.93 00:21:01.789 lat (usec): min=3753, max=13892, avg=8968.89, stdev=889.79 00:21:01.789 clat percentiles (usec): 00:21:01.789 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8225], 00:21:01.789 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:21:01.789 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:21:01.789 | 99.00th=[10945], 99.50th=[11207], 99.90th=[12125], 99.95th=[13042], 00:21:01.789 | 99.99th=[13304] 00:21:01.789 bw ( KiB/s): min=29032, max=31536, per=99.89%, avg=30666.00, stdev=1114.05, samples=4 00:21:01.789 iops : min= 7258, max= 7884, avg=7666.50, stdev=278.51, samples=4 00:21:01.789 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(60.0MiB/2006msec); 0 zone resets 00:21:01.789 slat (nsec): min=1897, max=308347, avg=2880.56, stdev=3412.30 00:21:01.789 clat (usec): min=2740, max=13345, avg=7662.09, stdev=762.52 00:21:01.789 lat (usec): min=2754, max=13347, avg=7664.97, stdev=762.48 00:21:01.789 clat 
percentiles (usec): 00:21:01.789 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 7046], 00:21:01.789 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:21:01.789 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8848], 00:21:01.789 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[11994], 99.95th=[12780], 00:21:01.789 | 99.99th=[13304] 00:21:01.789 bw ( KiB/s): min=29968, max=30960, per=99.88%, avg=30594.00, stdev=433.40, samples=4 00:21:01.789 iops : min= 7492, max= 7740, avg=7648.50, stdev=108.35, samples=4 00:21:01.789 lat (msec) : 4=0.05%, 10=93.70%, 20=6.25% 00:21:01.789 cpu : usr=66.18%, sys=24.24%, ctx=25, majf=0, minf=5 00:21:01.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:01.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:01.789 issued rwts: total=15396,15361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:01.789 00:21:01.789 Run status group 0 (all jobs): 00:21:01.789 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=60.1MiB (63.1MB), run=2006-2006msec 00:21:01.789 WRITE: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=60.0MiB (62.9MB), run=2006-2006msec 00:21:01.789 07:27:03 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:01.789 07:27:03 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:02.047 07:27:03 -- host/fio.sh@64 -- # ls_nested_guid=53c02f0d-1f73-4437-b8f7-d3247edbaa56 00:21:02.047 07:27:03 -- host/fio.sh@65 -- # get_lvs_free_mb 53c02f0d-1f73-4437-b8f7-d3247edbaa56 00:21:02.047 07:27:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=53c02f0d-1f73-4437-b8f7-d3247edbaa56 00:21:02.047 07:27:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:02.047 07:27:03 -- common/autotest_common.sh@1345 -- # local fc 00:21:02.047 07:27:03 -- common/autotest_common.sh@1346 -- # local cs 00:21:02.047 07:27:03 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.306 07:27:04 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:02.306 { 00:21:02.306 "base_bdev": "Nvme0n1", 00:21:02.306 "block_size": 4096, 00:21:02.306 "cluster_size": 1073741824, 00:21:02.306 "free_clusters": 0, 00:21:02.306 "name": "lvs_0", 00:21:02.306 "total_data_clusters": 4, 00:21:02.306 "uuid": "32ba3dc4-9bfc-4ec5-8128-61a0e9d8ea68" 00:21:02.306 }, 00:21:02.306 { 00:21:02.306 "base_bdev": "41b0e268-429a-445e-bd92-ff4b07a2ff28", 00:21:02.306 "block_size": 4096, 00:21:02.306 "cluster_size": 4194304, 00:21:02.306 "free_clusters": 1022, 00:21:02.306 "name": "lvs_n_0", 00:21:02.306 "total_data_clusters": 1022, 00:21:02.306 "uuid": "53c02f0d-1f73-4437-b8f7-d3247edbaa56" 00:21:02.306 } 00:21:02.306 ]' 00:21:02.306 07:27:04 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="53c02f0d-1f73-4437-b8f7-d3247edbaa56") .free_clusters' 00:21:02.565 07:27:04 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:02.565 07:27:04 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="53c02f0d-1f73-4437-b8f7-d3247edbaa56") .cluster_size' 00:21:02.565 07:27:04 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:02.565 4088 00:21:02.565 07:27:04 -- 
common/autotest_common.sh@1352 -- # free_mb=4088 00:21:02.565 07:27:04 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:02.565 07:27:04 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:02.823 435ff41c-a389-45d1-ab67-7a829d58690a 00:21:02.823 07:27:04 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:02.823 07:27:04 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:03.082 07:27:04 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:03.340 07:27:05 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.340 07:27:05 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.340 07:27:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:03.340 07:27:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:03.340 07:27:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:03.340 07:27:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.340 07:27:05 -- common/autotest_common.sh@1320 -- # shift 00:21:03.340 07:27:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:03.340 07:27:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.340 07:27:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:03.341 07:27:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:03.341 07:27:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:03.341 07:27:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:03.341 07:27:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:03.341 07:27:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:03.341 07:27:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.599 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:03.599 fio-3.35 00:21:03.599 Starting 1 thread 00:21:06.238 00:21:06.238 test: (groupid=0, jobs=1): err= 0: pid=94826: Mon Nov 4 07:27:07 2024 00:21:06.238 read: IOPS=6643, BW=26.0MiB/s (27.2MB/s)(52.1MiB/2008msec) 00:21:06.238 slat (nsec): min=1851, max=341865, avg=2956.67, 
stdev=4880.89 00:21:06.238 clat (usec): min=4610, max=16392, avg=10354.19, stdev=1032.91 00:21:06.238 lat (usec): min=4659, max=16395, avg=10357.15, stdev=1032.80 00:21:06.238 clat percentiles (usec): 00:21:06.238 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:21:06.238 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:21:06.238 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[11994], 00:21:06.238 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14877], 99.95th=[15795], 00:21:06.238 | 99.99th=[16319] 00:21:06.238 bw ( KiB/s): min=25176, max=27160, per=99.91%, avg=26552.00, stdev=925.26, samples=4 00:21:06.238 iops : min= 6294, max= 6790, avg=6638.00, stdev=231.32, samples=4 00:21:06.238 write: IOPS=6650, BW=26.0MiB/s (27.2MB/s)(52.2MiB/2008msec); 0 zone resets 00:21:06.238 slat (nsec): min=1981, max=306152, avg=3114.76, stdev=3772.76 00:21:06.238 clat (usec): min=2709, max=16417, avg=8825.58, stdev=874.89 00:21:06.238 lat (usec): min=2723, max=16419, avg=8828.69, stdev=874.89 00:21:06.238 clat percentiles (usec): 00:21:06.238 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8094], 00:21:06.238 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:06.238 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:21:06.238 | 99.00th=[10814], 99.50th=[10945], 99.90th=[14746], 99.95th=[15795], 00:21:06.238 | 99.99th=[16319] 00:21:06.238 bw ( KiB/s): min=26312, max=27048, per=99.93%, avg=26584.00, stdev=351.03, samples=4 00:21:06.238 iops : min= 6578, max= 6762, avg=6646.00, stdev=87.76, samples=4 00:21:06.238 lat (msec) : 4=0.03%, 10=64.79%, 20=35.18% 00:21:06.238 cpu : usr=69.31%, sys=21.72%, ctx=7, majf=0, minf=5 00:21:06.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:06.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.238 issued rwts: total=13341,13354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.238 00:21:06.238 Run status group 0 (all jobs): 00:21:06.238 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=52.1MiB (54.6MB), run=2008-2008msec 00:21:06.238 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=52.2MiB (54.7MB), run=2008-2008msec 00:21:06.238 07:27:07 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:06.238 07:27:07 -- host/fio.sh@74 -- # sync 00:21:06.238 07:27:07 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:06.497 07:27:08 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:06.755 07:27:08 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:07.014 07:27:08 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:07.014 07:27:08 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:08.387 07:27:09 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:08.387 07:27:09 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:08.387 07:27:09 -- host/fio.sh@86 -- # nvmftestfini 00:21:08.387 07:27:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:08.387 
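For reference, the get_lvs_free_mb figures used above come straight out of the bdev_lvol_get_lvstores output: lvs_0 reports 4 free clusters of 1073741824 bytes (1 GiB), hence the 4096 MiB lbd_0 volume, while the nested lvs_n_0 store reports 1022 free clusters of 4194304 bytes (4 MiB), hence 4088 MiB for lbd_nest_0. A condensed sketch of the create and teardown sequence, writing rpc.py for the scripts/rpc.py path used throughout this log and assuming the Nvme0 controller is already attached:
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0             # 1 GiB clusters on the NVMe namespace
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                             # 4 clusters * 1024 MiB = 4096 MiB
  rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 # nested store, 4 MiB clusters
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088                      # 1022 clusters * 4 MiB = 4088 MiB
  # teardown walks back in reverse order before the controller is detached
  rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  rpc.py bdev_lvol_delete lvs_0/lbd_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_0
  rpc.py bdev_nvme_detach_controller Nvme0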
07:27:09 -- nvmf/common.sh@116 -- # sync 00:21:08.387 07:27:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:08.387 07:27:09 -- nvmf/common.sh@119 -- # set +e 00:21:08.387 07:27:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:08.387 07:27:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:08.387 rmmod nvme_tcp 00:21:08.387 rmmod nvme_fabrics 00:21:08.387 rmmod nvme_keyring 00:21:08.387 07:27:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:08.387 07:27:09 -- nvmf/common.sh@123 -- # set -e 00:21:08.387 07:27:09 -- nvmf/common.sh@124 -- # return 0 00:21:08.387 07:27:09 -- nvmf/common.sh@477 -- # '[' -n 94379 ']' 00:21:08.387 07:27:09 -- nvmf/common.sh@478 -- # killprocess 94379 00:21:08.387 07:27:09 -- common/autotest_common.sh@926 -- # '[' -z 94379 ']' 00:21:08.387 07:27:09 -- common/autotest_common.sh@930 -- # kill -0 94379 00:21:08.387 07:27:09 -- common/autotest_common.sh@931 -- # uname 00:21:08.387 07:27:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:08.387 07:27:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94379 00:21:08.387 07:27:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:08.387 killing process with pid 94379 00:21:08.387 07:27:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:08.387 07:27:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94379' 00:21:08.387 07:27:09 -- common/autotest_common.sh@945 -- # kill 94379 00:21:08.387 07:27:09 -- common/autotest_common.sh@950 -- # wait 94379 00:21:08.387 07:27:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:08.387 07:27:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:08.387 07:27:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:08.387 07:27:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.387 07:27:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:08.387 07:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.387 07:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.387 07:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.646 07:27:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:08.646 00:21:08.646 real 0m19.443s 00:21:08.646 user 1m24.814s 00:21:08.646 sys 0m4.500s 00:21:08.646 07:27:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.646 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.646 ************************************ 00:21:08.646 END TEST nvmf_fio_host 00:21:08.646 ************************************ 00:21:08.646 07:27:10 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.646 07:27:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:08.646 07:27:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.646 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.646 ************************************ 00:21:08.646 START TEST nvmf_failover 00:21:08.646 ************************************ 00:21:08.646 07:27:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.646 * Looking for test storage... 
00:21:08.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.646 07:27:10 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.646 07:27:10 -- nvmf/common.sh@7 -- # uname -s 00:21:08.646 07:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.646 07:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.646 07:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.646 07:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.646 07:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.646 07:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.646 07:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.646 07:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.646 07:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.646 07:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:08.646 07:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:08.646 07:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.646 07:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.646 07:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.646 07:27:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.646 07:27:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.646 07:27:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.646 07:27:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.646 07:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.646 07:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.646 07:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.646 07:27:10 -- paths/export.sh@5 
-- # export PATH 00:21:08.646 07:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.646 07:27:10 -- nvmf/common.sh@46 -- # : 0 00:21:08.646 07:27:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:08.646 07:27:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:08.646 07:27:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:08.646 07:27:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.646 07:27:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.646 07:27:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:08.646 07:27:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:08.646 07:27:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:08.646 07:27:10 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.646 07:27:10 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.646 07:27:10 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.646 07:27:10 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.646 07:27:10 -- host/failover.sh@18 -- # nvmftestinit 00:21:08.646 07:27:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:08.646 07:27:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.646 07:27:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:08.646 07:27:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:08.646 07:27:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:08.646 07:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.646 07:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.646 07:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.646 07:27:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:08.646 07:27:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:08.646 07:27:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.646 07:27:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.646 07:27:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.646 07:27:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:08.646 07:27:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.646 07:27:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.646 07:27:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.646 07:27:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.646 07:27:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.646 07:27:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.646 07:27:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:21:08.646 07:27:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.646 07:27:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:08.646 07:27:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:08.646 Cannot find device "nvmf_tgt_br" 00:21:08.646 07:27:10 -- nvmf/common.sh@154 -- # true 00:21:08.646 07:27:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.646 Cannot find device "nvmf_tgt_br2" 00:21:08.646 07:27:10 -- nvmf/common.sh@155 -- # true 00:21:08.646 07:27:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:08.646 07:27:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:08.646 Cannot find device "nvmf_tgt_br" 00:21:08.646 07:27:10 -- nvmf/common.sh@157 -- # true 00:21:08.646 07:27:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:08.646 Cannot find device "nvmf_tgt_br2" 00:21:08.646 07:27:10 -- nvmf/common.sh@158 -- # true 00:21:08.646 07:27:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:08.646 07:27:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:08.906 07:27:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.906 07:27:10 -- nvmf/common.sh@161 -- # true 00:21:08.906 07:27:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.906 07:27:10 -- nvmf/common.sh@162 -- # true 00:21:08.906 07:27:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.906 07:27:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.906 07:27:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.906 07:27:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.906 07:27:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.906 07:27:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.906 07:27:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.906 07:27:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:08.906 07:27:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:08.906 07:27:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:08.906 07:27:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:08.906 07:27:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:08.906 07:27:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:08.906 07:27:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.906 07:27:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.906 07:27:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.906 07:27:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:08.906 07:27:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:08.906 07:27:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.906 07:27:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.906 07:27:10 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:21:08.906 07:27:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.906 07:27:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.906 07:27:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:08.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:21:08.906 00:21:08.906 --- 10.0.0.2 ping statistics --- 00:21:08.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.906 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:08.906 07:27:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:08.906 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.906 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:08.906 00:21:08.906 --- 10.0.0.3 ping statistics --- 00:21:08.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.906 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:08.906 07:27:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:08.906 00:21:08.906 --- 10.0.0.1 ping statistics --- 00:21:08.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.906 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:08.906 07:27:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.906 07:27:10 -- nvmf/common.sh@421 -- # return 0 00:21:08.906 07:27:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:08.906 07:27:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.906 07:27:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:08.906 07:27:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:08.906 07:27:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.906 07:27:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:08.906 07:27:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:08.906 07:27:10 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:08.906 07:27:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:08.906 07:27:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:08.906 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.906 07:27:10 -- nvmf/common.sh@469 -- # nvmfpid=95098 00:21:08.906 07:27:10 -- nvmf/common.sh@470 -- # waitforlisten 95098 00:21:08.906 07:27:10 -- common/autotest_common.sh@819 -- # '[' -z 95098 ']' 00:21:08.906 07:27:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:08.906 07:27:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.906 07:27:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:08.906 07:27:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.906 07:27:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:08.906 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:21:09.165 [2024-11-04 07:27:10.756506] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
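As in the fio host test earlier, nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace (via the NVMF_TARGET_NS_CMD prefix) and waitforlisten blocks until the RPC socket answers before any rpc.py call is issued. A minimal approximation of that pattern, using the repo paths from the log and rpc_get_methods only as a liveness probe (an assumption; the actual helper polls the socket differently):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to accept commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done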
00:21:09.165 [2024-11-04 07:27:10.756565] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.165 [2024-11-04 07:27:10.885697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.165 [2024-11-04 07:27:10.957148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:09.165 [2024-11-04 07:27:10.957769] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.165 [2024-11-04 07:27:10.958012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.165 [2024-11-04 07:27:10.958279] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.165 [2024-11-04 07:27:10.958703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.165 [2024-11-04 07:27:10.958924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.165 [2024-11-04 07:27:10.958790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.102 07:27:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.102 07:27:11 -- common/autotest_common.sh@852 -- # return 0 00:21:10.102 07:27:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.102 07:27:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.102 07:27:11 -- common/autotest_common.sh@10 -- # set +x 00:21:10.102 07:27:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.102 07:27:11 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.102 [2024-11-04 07:27:11.925724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.361 07:27:11 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.361 Malloc0 00:21:10.619 07:27:12 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.619 07:27:12 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.878 07:27:12 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.136 [2024-11-04 07:27:12.826479] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.136 07:27:12 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.395 [2024-11-04 07:27:13.042690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.395 07:27:13 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:11.654 [2024-11-04 07:27:13.255118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:11.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
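The failover host test differs from the fio test above mainly in how the target is populated: the same nqn.2016-06.io.spdk:cnode1 subsystem is exposed on three TCP listeners (10.0.0.2, ports 4420, 4421 and 4422), so that bdevperf can be moved between paths when a listener is removed later in the test. Condensed from the rpc.py calls in the log (rpc.py again standing for scripts/rpc.py):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done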
00:21:11.654 07:27:13 -- host/failover.sh@31 -- # bdevperf_pid=95210 00:21:11.654 07:27:13 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:11.654 07:27:13 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.654 07:27:13 -- host/failover.sh@34 -- # waitforlisten 95210 /var/tmp/bdevperf.sock 00:21:11.654 07:27:13 -- common/autotest_common.sh@819 -- # '[' -z 95210 ']' 00:21:11.654 07:27:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.654 07:27:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.654 07:27:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.654 07:27:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.654 07:27:13 -- common/autotest_common.sh@10 -- # set +x 00:21:12.590 07:27:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.590 07:27:14 -- common/autotest_common.sh@852 -- # return 0 00:21:12.590 07:27:14 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.158 NVMe0n1 00:21:13.158 07:27:14 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.158 00:21:13.158 07:27:14 -- host/failover.sh@39 -- # run_test_pid=95262 00:21:13.158 07:27:14 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.158 07:27:14 -- host/failover.sh@41 -- # sleep 1 00:21:14.535 07:27:15 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.535 [2024-11-04 07:27:16.277690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535 [2024-11-04 07:27:16.277803] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.535
[... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x17a1c90 repeats many more times while the 4420 listener is removed ...]
[2024-11-04 07:27:16.278196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 [2024-11-04 07:27:16.278204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 [2024-11-04 07:27:16.278212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 [2024-11-04 07:27:16.278220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 [2024-11-04 07:27:16.278227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 [2024-11-04 07:27:16.278235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1c90 is same with the state(5) to be set 00:21:14.536 07:27:16 -- host/failover.sh@45 -- # sleep 3 00:21:17.822 07:27:19 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.822 00:21:17.822 07:27:19 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:18.088 [2024-11-04 07:27:19.883309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088 [2024-11-04 07:27:19.883447] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.088
[... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x17a3380 repeats many more times while the 4421 listener is removed ...]
[2024-11-04 07:27:19.883746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 [2024-11-04 07:27:19.883753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 [2024-11-04 07:27:19.883760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 [2024-11-04 07:27:19.883767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 [2024-11-04 07:27:19.883774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 [2024-11-04 07:27:19.883781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3380 is same with the state(5) to be set 00:21:18.089 07:27:19 -- host/failover.sh@50 -- # sleep 3 00:21:21.403 07:27:22 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.403 [2024-11-04 07:27:23.159474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.403 07:27:23 -- host/failover.sh@55 -- # sleep 1 00:21:22.779 07:27:24 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:22.779 [2024-11-04 07:27:24.437813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.779 [2024-11-04 07:27:24.437863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.779 [2024-11-04 07:27:24.437884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.779 [2024-11-04 07:27:24.437893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 [2024-11-04 07:27:24.437956] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780
[... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x17a3a60 repeats many more times while the 4422 listener is removed ...]
[2024-11-04 07:27:24.438280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x17a3a60 is same with the state(5) to be set 00:21:22.780 07:27:24 -- host/failover.sh@59 -- # wait 95262 00:21:29.355 0 00:21:29.355 07:27:30 -- host/failover.sh@61 -- # killprocess 95210 00:21:29.355 07:27:30 -- common/autotest_common.sh@926 -- # '[' -z 95210 ']' 00:21:29.355 07:27:30 -- common/autotest_common.sh@930 -- # kill -0 95210 00:21:29.355 07:27:30 -- common/autotest_common.sh@931 -- # uname 00:21:29.355 07:27:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.355 07:27:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95210 00:21:29.355 07:27:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.355 07:27:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.355 killing process with pid 95210 00:21:29.355 07:27:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95210' 00:21:29.355 07:27:30 -- common/autotest_common.sh@945 -- # kill 95210 00:21:29.355 07:27:30 -- common/autotest_common.sh@950 -- # wait 95210 00:21:29.355 07:27:30 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:29.355 [2024-11-04 07:27:13.314940] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:29.355 [2024-11-04 07:27:13.315030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95210 ] 00:21:29.355 [2024-11-04 07:27:13.451350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.355 [2024-11-04 07:27:13.535087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.355 Running I/O for 15 seconds... 
00:21:29.355 [2024-11-04 07:27:16.278605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.278979] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.278991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.355 [2024-11-04 07:27:16.279518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.355 [2024-11-04 07:27:16.279531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.279900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.279931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.279958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.279984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.279998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.280010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.280069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.356 [2024-11-04 07:27:16.280131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.280352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.280465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.356 [2024-11-04 07:27:16.280575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.356 [2024-11-04 07:27:16.280641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.356 [2024-11-04 07:27:16.280653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.280679] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.280731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.280757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.280975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.280989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.357 [2024-11-04 07:27:16.281257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.357 [2024-11-04 07:27:16.281466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281539] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.357 [2024-11-04 07:27:16.281755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.357 [2024-11-04 07:27:16.281768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.281794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.281819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.358 [2024-11-04 07:27:16.281860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.281900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.281927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.358 [2024-11-04 07:27:16.281953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.358 [2024-11-04 07:27:16.281980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.281995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.358 [2024-11-04 07:27:16.282060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:16.282259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba130 is same with the state(5) to be set 00:21:29.358 [2024-11-04 07:27:16.282288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.358 [2024-11-04 07:27:16.282298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.358 [2024-11-04 07:27:16.282314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15960 len:8 PRP1 0x0 PRP2 0x0 00:21:29.358 [2024-11-04 07:27:16.282326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282393] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bba130 was disconnected and freed. reset controller. 
00:21:29.358 [2024-11-04 07:27:16.282412] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:29.358 [2024-11-04 07:27:16.282470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.358 [2024-11-04 07:27:16.282491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.358 [2024-11-04 07:27:16.282543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.358 [2024-11-04 07:27:16.282569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.358 [2024-11-04 07:27:16.282594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:16.282607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.358 [2024-11-04 07:27:16.284840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.358 [2024-11-04 07:27:16.284900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b35cb0 (9): Bad file descriptor 00:21:29.358 [2024-11-04 07:27:16.305814] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.358 [2024-11-04 07:27:19.883874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.883963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.883990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.358 [2024-11-04 07:27:19.884339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.358 [2024-11-04 07:27:19.884358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:70 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.359 [2024-11-04 07:27:19.884819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.884990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.885008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60248 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.885033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.359 [2024-11-04 07:27:19.885057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.885083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.359 [2024-11-04 07:27:19.885106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.359 [2024-11-04 07:27:19.885130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.359 [2024-11-04 07:27:19.885154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.359 [2024-11-04 07:27:19.885178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.359 [2024-11-04 07:27:19.885190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.885231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.360 [2024-11-04 07:27:19.885279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885528] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.885886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.885969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.885982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.885993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.886017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.886047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.886071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.886096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.886120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.886144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.886168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.360 [2024-11-04 07:27:19.886192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.360 [2024-11-04 07:27:19.886205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.360 [2024-11-04 07:27:19.886216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 
[2024-11-04 07:27:19.886574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.886814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.886961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.887005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.887030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.361 [2024-11-04 07:27:19.887061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.361 [2024-11-04 07:27:19.887310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.361 [2024-11-04 07:27:19.887323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94b10 is same with the state(5) to be set 00:21:29.361 [2024-11-04 07:27:19.887344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.361 [2024-11-04 07:27:19.887354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.362 [2024-11-04 07:27:19.887368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60800 len:8 PRP1 0x0 PRP2 0x0 00:21:29.362 [2024-11-04 07:27:19.887380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:19.887443] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b94b10 was disconnected and freed. reset controller. 
00:21:29.362 [2024-11-04 07:27:19.887460] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:29.362 [2024-11-04 07:27:19.887513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.362 [2024-11-04 07:27:19.887533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:19.887546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.362 [2024-11-04 07:27:19.887557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:19.887570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.362 [2024-11-04 07:27:19.887581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:19.887593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.362 [2024-11-04 07:27:19.887604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:19.887615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.362 [2024-11-04 07:27:19.887647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b35cb0 (9): Bad file descriptor 00:21:29.362 [2024-11-04 07:27:19.889642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.362 [2024-11-04 07:27:19.907410] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.362 [2024-11-04 07:27:24.438388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.438983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.438997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.362 [2024-11-04 07:27:24.439299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.362 [2024-11-04 07:27:24.439324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.362 [2024-11-04 07:27:24.439350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:76 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.362 [2024-11-04 07:27:24.439429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.362 [2024-11-04 07:27:24.439442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.439774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.363 [2024-11-04 07:27:24.439919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.439981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.439993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.440271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.363 [2024-11-04 07:27:24.440295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.363 [2024-11-04 07:27:24.440375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.363 [2024-11-04 07:27:24.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440423] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.440534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.440585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.440610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.440985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.440998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.364 [2024-11-04 07:27:24.441069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.441304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.441353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.364 [2024-11-04 07:27:24.441522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.364 [2024-11-04 07:27:24.441535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.364 [2024-11-04 07:27:24.441552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441637] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.365 [2024-11-04 07:27:24.441822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92744 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.441985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.441998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.442009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.365 [2024-11-04 07:27:24.442033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbc210 is same with the state(5) to be set 00:21:29.365 [2024-11-04 07:27:24.442061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.365 [2024-11-04 07:27:24.442071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.365 [2024-11-04 07:27:24.442080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92864 len:8 PRP1 0x0 PRP2 0x0 00:21:29.365 [2024-11-04 07:27:24.442092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442156] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbc210 was disconnected and freed. reset controller. 
00:21:29.365 [2024-11-04 07:27:24.442174] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:29.365 [2024-11-04 07:27:24.442229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.365 [2024-11-04 07:27:24.442257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.365 [2024-11-04 07:27:24.442283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.365 [2024-11-04 07:27:24.442306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.365 [2024-11-04 07:27:24.442329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.365 [2024-11-04 07:27:24.442348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.365 [2024-11-04 07:27:24.442393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b35cb0 (9): Bad file descriptor 00:21:29.365 [2024-11-04 07:27:24.444607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.365 [2024-11-04 07:27:24.466635] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
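Each of the dumps above ends the same way: the queued I/O on the old path is printed and aborted with SQ DELETION, the qpair is freed, a failover to the next listener is started, and the controller reset completes. A minimal sketch for summarizing such a dump offline, assuming the relevant output has been redirected to a file such as the try.txt the test script reads later (the grep/sort/uniq pipeline is only an illustration added here, not part of the test):

  LOG=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # any saved copy of the log works
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG" | sort | uniq -c   # which path transitions occurred
  grep -c 'ABORTED - SQ DELETION' "$LOG"                                       # how many queued commands were aborted
  grep -c 'Resetting controller successful' "$LOG"                             # how many resets completed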
00:21:29.365 
00:21:29.365 Latency(us)
00:21:29.365 [2024-11-04T07:27:31.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:29.365 [2024-11-04T07:27:31.206Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:29.365 Verification LBA range: start 0x0 length 0x4000
00:21:29.365 NVMe0n1 : 15.01 15091.58 58.95 236.52 0.00 8336.16 573.44 15073.28
00:21:29.365 [2024-11-04T07:27:31.206Z] ===================================================================================================================
00:21:29.365 [2024-11-04T07:27:31.206Z] Total : 15091.58 58.95 236.52 0.00 8336.16 573.44 15073.28
00:21:29.365 Received shutdown signal, test time was about 15.000000 seconds
00:21:29.365 
00:21:29.365 Latency(us)
00:21:29.365 [2024-11-04T07:27:31.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:29.365 [2024-11-04T07:27:31.206Z] ===================================================================================================================
00:21:29.365 [2024-11-04T07:27:31.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:29.365 07:27:30 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:29.365 07:27:30 -- host/failover.sh@65 -- # count=3
00:21:29.365 07:27:30 -- host/failover.sh@67 -- # (( count != 3 ))
00:21:29.365 07:27:30 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:29.365 07:27:30 -- host/failover.sh@73 -- # bdevperf_pid=95465
00:21:29.365 07:27:30 -- host/failover.sh@75 -- # waitforlisten 95465 /var/tmp/bdevperf.sock
00:21:29.365 07:27:30 -- common/autotest_common.sh@819 -- # '[' -z 95465 ']'
00:21:29.365 07:27:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:29.365 07:27:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:29.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:29.365 07:27:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:29.365 07:27:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.365 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.624 07:27:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:29.624 07:27:31 -- common/autotest_common.sh@852 -- # return 0 00:21:29.624 07:27:31 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:29.882 [2024-11-04 07:27:31.659674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.882 07:27:31 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:30.141 [2024-11-04 07:27:31.875809] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:30.141 07:27:31 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.399 NVMe0n1 00:21:30.399 07:27:32 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.657 00:21:30.657 07:27:32 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.916 00:21:30.916 07:27:32 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.916 07:27:32 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:31.174 07:27:32 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.433 07:27:33 -- host/failover.sh@87 -- # sleep 3 00:21:34.719 07:27:36 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.719 07:27:36 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:34.719 07:27:36 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.719 07:27:36 -- host/failover.sh@90 -- # run_test_pid=95604 00:21:34.720 07:27:36 -- host/failover.sh@92 -- # wait 95604 00:21:36.097 0 00:21:36.097 07:27:37 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:36.097 [2024-11-04 07:27:30.461187] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
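Condensing the trace above: the target is made to listen on two more ports (4420 is already listening at this point in the run), all three trids are registered under the same controller name on the bdevperf side, and the active path is then detached to force a failover. A sketch under those assumptions, with $rpc_py standing in for scripts/rpc.py:

  # target side: expose the subsystem on the additional ports
  for port in 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # host side: repeated attaches with the same -b name register 4421/4422 as alternate trids
  for port in 4420 4421 4422; do
      $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # detaching the active 4420 path is what triggers the "Start failover" notices above
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1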
00:21:36.097 [2024-11-04 07:27:30.461304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95465 ] 00:21:36.097 [2024-11-04 07:27:30.594161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.097 [2024-11-04 07:27:30.674448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.097 [2024-11-04 07:27:33.174145] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:36.097 [2024-11-04 07:27:33.174254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.097 [2024-11-04 07:27:33.174293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.097 [2024-11-04 07:27:33.174317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.097 [2024-11-04 07:27:33.174329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.097 [2024-11-04 07:27:33.174342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.097 [2024-11-04 07:27:33.174353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.097 [2024-11-04 07:27:33.174366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.097 [2024-11-04 07:27:33.174378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.097 [2024-11-04 07:27:33.174391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.097 [2024-11-04 07:27:33.174446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.097 [2024-11-04 07:27:33.174475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50cb0 (9): Bad file descriptor 00:21:36.097 [2024-11-04 07:27:33.177795] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:36.097 Running I/O for 1 seconds... 
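As a quick cross-check on the result table that follows: the MiB/s column is just IOPS times the 4096-byte I/O size, e.g. 15298.22 IOPS x 4096 B is roughly 62.7 MB/s, i.e. 59.76 MiB/s, which matches the reported value; the Average/min/max figures are per-I/O latencies in microseconds, consistent with a queue depth of 128 at that IOPS rate.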
00:21:36.097 00:21:36.097 Latency(us) 00:21:36.097 [2024-11-04T07:27:37.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.097 [2024-11-04T07:27:37.938Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:36.097 Verification LBA range: start 0x0 length 0x4000 00:21:36.097 NVMe0n1 : 1.01 15298.22 59.76 0.00 0.00 8329.28 1005.38 9889.98 00:21:36.097 [2024-11-04T07:27:37.938Z] =================================================================================================================== 00:21:36.097 [2024-11-04T07:27:37.938Z] Total : 15298.22 59.76 0.00 0.00 8329.28 1005.38 9889.98 00:21:36.097 07:27:37 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:36.098 07:27:37 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.098 07:27:37 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.356 07:27:38 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.356 07:27:38 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:36.615 07:27:38 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.873 07:27:38 -- host/failover.sh@101 -- # sleep 3 00:21:40.158 07:27:41 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.158 07:27:41 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:40.158 07:27:41 -- host/failover.sh@108 -- # killprocess 95465 00:21:40.158 07:27:41 -- common/autotest_common.sh@926 -- # '[' -z 95465 ']' 00:21:40.158 07:27:41 -- common/autotest_common.sh@930 -- # kill -0 95465 00:21:40.158 07:27:41 -- common/autotest_common.sh@931 -- # uname 00:21:40.158 07:27:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:40.158 07:27:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95465 00:21:40.158 killing process with pid 95465 00:21:40.158 07:27:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:40.158 07:27:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:40.158 07:27:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95465' 00:21:40.158 07:27:41 -- common/autotest_common.sh@945 -- # kill 95465 00:21:40.158 07:27:41 -- common/autotest_common.sh@950 -- # wait 95465 00:21:40.417 07:27:42 -- host/failover.sh@110 -- # sync 00:21:40.417 07:27:42 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.676 07:27:42 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:40.676 07:27:42 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:40.676 07:27:42 -- host/failover.sh@116 -- # nvmftestfini 00:21:40.676 07:27:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:40.676 07:27:42 -- nvmf/common.sh@116 -- # sync 00:21:40.676 07:27:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:40.676 07:27:42 -- nvmf/common.sh@119 -- # set +e 00:21:40.676 07:27:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:40.676 07:27:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:40.676 rmmod nvme_tcp 
00:21:40.676 rmmod nvme_fabrics 00:21:40.676 rmmod nvme_keyring 00:21:40.676 07:27:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:40.676 07:27:42 -- nvmf/common.sh@123 -- # set -e 00:21:40.676 07:27:42 -- nvmf/common.sh@124 -- # return 0 00:21:40.676 07:27:42 -- nvmf/common.sh@477 -- # '[' -n 95098 ']' 00:21:40.676 07:27:42 -- nvmf/common.sh@478 -- # killprocess 95098 00:21:40.676 07:27:42 -- common/autotest_common.sh@926 -- # '[' -z 95098 ']' 00:21:40.676 07:27:42 -- common/autotest_common.sh@930 -- # kill -0 95098 00:21:40.676 07:27:42 -- common/autotest_common.sh@931 -- # uname 00:21:40.676 07:27:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:40.676 07:27:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95098 00:21:40.676 killing process with pid 95098 00:21:40.676 07:27:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:40.676 07:27:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:40.676 07:27:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95098' 00:21:40.676 07:27:42 -- common/autotest_common.sh@945 -- # kill 95098 00:21:40.676 07:27:42 -- common/autotest_common.sh@950 -- # wait 95098 00:21:41.244 07:27:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:41.244 07:27:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:41.244 07:27:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:41.244 07:27:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.244 07:27:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:41.244 07:27:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.244 07:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.244 07:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.244 07:27:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:41.244 00:21:41.244 real 0m32.524s 00:21:41.244 user 2m6.059s 00:21:41.244 sys 0m5.069s 00:21:41.244 07:27:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.244 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:21:41.244 ************************************ 00:21:41.244 END TEST nvmf_failover 00:21:41.244 ************************************ 00:21:41.244 07:27:42 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.244 07:27:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:41.244 07:27:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:41.244 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:21:41.244 ************************************ 00:21:41.244 START TEST nvmf_discovery 00:21:41.244 ************************************ 00:21:41.244 07:27:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.244 * Looking for test storage... 
00:21:41.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:41.244 07:27:42 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.244 07:27:42 -- nvmf/common.sh@7 -- # uname -s 00:21:41.244 07:27:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.244 07:27:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.244 07:27:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.244 07:27:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.244 07:27:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.244 07:27:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.244 07:27:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.244 07:27:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.244 07:27:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.244 07:27:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.244 07:27:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:41.244 07:27:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:41.244 07:27:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.244 07:27:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.244 07:27:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.244 07:27:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.244 07:27:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.244 07:27:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.244 07:27:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.244 07:27:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.244 07:27:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.244 07:27:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.244 07:27:42 -- paths/export.sh@5 
-- # export PATH 00:21:41.244 07:27:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.244 07:27:42 -- nvmf/common.sh@46 -- # : 0 00:21:41.244 07:27:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:41.244 07:27:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:41.244 07:27:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:41.244 07:27:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.244 07:27:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.244 07:27:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:41.244 07:27:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:41.244 07:27:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:41.244 07:27:42 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:41.244 07:27:42 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:41.244 07:27:42 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:41.245 07:27:42 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:41.245 07:27:42 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:41.245 07:27:42 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:41.245 07:27:42 -- host/discovery.sh@25 -- # nvmftestinit 00:21:41.245 07:27:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:41.245 07:27:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.245 07:27:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:41.245 07:27:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:41.245 07:27:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:41.245 07:27:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.245 07:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.245 07:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.245 07:27:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:41.245 07:27:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:41.245 07:27:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:41.245 07:27:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:41.245 07:27:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:41.245 07:27:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:41.245 07:27:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.245 07:27:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.245 07:27:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.245 07:27:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:41.245 07:27:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.245 07:27:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.245 07:27:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.245 07:27:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.245 07:27:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.245 
07:27:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.245 07:27:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.245 07:27:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.245 07:27:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:41.245 07:27:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:41.245 Cannot find device "nvmf_tgt_br" 00:21:41.245 07:27:43 -- nvmf/common.sh@154 -- # true 00:21:41.245 07:27:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.245 Cannot find device "nvmf_tgt_br2" 00:21:41.245 07:27:43 -- nvmf/common.sh@155 -- # true 00:21:41.245 07:27:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:41.245 07:27:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:41.245 Cannot find device "nvmf_tgt_br" 00:21:41.245 07:27:43 -- nvmf/common.sh@157 -- # true 00:21:41.245 07:27:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:41.245 Cannot find device "nvmf_tgt_br2" 00:21:41.245 07:27:43 -- nvmf/common.sh@158 -- # true 00:21:41.245 07:27:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:41.504 07:27:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:41.504 07:27:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.504 07:27:43 -- nvmf/common.sh@161 -- # true 00:21:41.504 07:27:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.504 07:27:43 -- nvmf/common.sh@162 -- # true 00:21:41.504 07:27:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.504 07:27:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.504 07:27:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.504 07:27:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.504 07:27:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.504 07:27:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.504 07:27:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.504 07:27:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:41.504 07:27:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:41.504 07:27:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:41.504 07:27:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:41.504 07:27:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:41.504 07:27:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:41.504 07:27:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.504 07:27:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.504 07:27:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.504 07:27:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:41.504 07:27:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:41.504 07:27:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:41.504 07:27:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.504 07:27:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.504 07:27:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.504 07:27:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.504 07:27:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:41.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:21:41.504 00:21:41.504 --- 10.0.0.2 ping statistics --- 00:21:41.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.504 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:41.504 07:27:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:41.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:41.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:21:41.504 00:21:41.504 --- 10.0.0.3 ping statistics --- 00:21:41.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.504 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:41.504 07:27:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:21:41.504 00:21:41.504 --- 10.0.0.1 ping statistics --- 00:21:41.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.504 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:21:41.504 07:27:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.504 07:27:43 -- nvmf/common.sh@421 -- # return 0 00:21:41.504 07:27:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.504 07:27:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.504 07:27:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.504 07:27:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.504 07:27:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.504 07:27:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.504 07:27:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.504 07:27:43 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:41.504 07:27:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.504 07:27:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:41.504 07:27:43 -- common/autotest_common.sh@10 -- # set +x 00:21:41.504 07:27:43 -- nvmf/common.sh@469 -- # nvmfpid=95897 00:21:41.504 07:27:43 -- nvmf/common.sh@470 -- # waitforlisten 95897 00:21:41.504 07:27:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.504 07:27:43 -- common/autotest_common.sh@819 -- # '[' -z 95897 ']' 00:21:41.504 07:27:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.504 07:27:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.504 07:27:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
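For readability, the namespace plumbing that nvmf_veth_init builds in the trace above boils down to the following sketch (the second target address 10.0.0.3 and the *_br2 pair are omitted; interface and namespace names are the ones used in this run):

  # the target runs inside its own network namespace, the initiator stays in the root one
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the two veth tails together and open the NVMe/TCP port
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # the initiator at 10.0.0.1 should now reach the target at 10.0.0.2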
00:21:41.504 07:27:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.504 07:27:43 -- common/autotest_common.sh@10 -- # set +x 00:21:41.763 [2024-11-04 07:27:43.370333] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:41.763 [2024-11-04 07:27:43.370422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.763 [2024-11-04 07:27:43.510666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.763 [2024-11-04 07:27:43.587580] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:41.763 [2024-11-04 07:27:43.587720] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.763 [2024-11-04 07:27:43.587732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.763 [2024-11-04 07:27:43.587741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.763 [2024-11-04 07:27:43.587773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.700 07:27:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:42.700 07:27:44 -- common/autotest_common.sh@852 -- # return 0 00:21:42.700 07:27:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:42.700 07:27:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 07:27:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.700 07:27:44 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.700 07:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 [2024-11-04 07:27:44.459926] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.700 07:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.700 07:27:44 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:42.700 07:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 [2024-11-04 07:27:44.468018] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:42.700 07:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.700 07:27:44 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:42.700 07:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 null0 00:21:42.700 07:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.700 07:27:44 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:42.700 07:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 null1 00:21:42.700 07:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.700 07:27:44 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:42.700 07:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.700 07:27:44 -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.700 07:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.700 07:27:44 -- host/discovery.sh@45 -- # hostpid=95947 00:21:42.700 07:27:44 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:42.700 07:27:44 -- host/discovery.sh@46 -- # waitforlisten 95947 /tmp/host.sock 00:21:42.700 07:27:44 -- common/autotest_common.sh@819 -- # '[' -z 95947 ']' 00:21:42.700 07:27:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:42.700 07:27:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:42.700 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:42.700 07:27:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:42.700 07:27:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:42.700 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.959 [2024-11-04 07:27:44.554240] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:42.959 [2024-11-04 07:27:44.554351] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95947 ] 00:21:42.959 [2024-11-04 07:27:44.696046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.959 [2024-11-04 07:27:44.774093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.959 [2024-11-04 07:27:44.774309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.921 07:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.921 07:27:45 -- common/autotest_common.sh@852 -- # return 0 00:21:43.922 07:27:45 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.922 07:27:45 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@72 -- # notify_id=0 00:21:43.922 07:27:45 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # xargs 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # sort 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:43.922 07:27:45 -- host/discovery.sh@79 -- # get_bdev_list 00:21:43.922 
07:27:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # xargs 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # sort 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:43.922 07:27:45 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # sort 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- host/discovery.sh@59 -- # xargs 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:43.922 07:27:45 -- host/discovery.sh@83 -- # get_bdev_list 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # sort 00:21:43.922 07:27:45 -- host/discovery.sh@55 -- # xargs 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:43.922 07:27:45 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:43.922 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.922 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:43.922 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.922 07:27:45 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # sort 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # xargs 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.181 07:27:45 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:44.181 07:27:45 -- host/discovery.sh@87 -- # get_bdev_list 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # xargs 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # sort 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set 
+x 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.181 07:27:45 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:44.181 07:27:45 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.181 [2024-11-04 07:27:45.876376] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.181 07:27:45 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # sort 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.181 07:27:45 -- host/discovery.sh@59 -- # xargs 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.181 07:27:45 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:44.181 07:27:45 -- host/discovery.sh@93 -- # get_bdev_list 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # sort 00:21:44.181 07:27:45 -- host/discovery.sh@55 -- # xargs 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.181 07:27:45 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:44.181 07:27:45 -- host/discovery.sh@94 -- # get_notification_count 00:21:44.181 07:27:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:44.181 07:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.181 07:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.181 07:27:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:44.181 07:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.440 07:27:46 -- host/discovery.sh@74 -- # notification_count=0 00:21:44.440 07:27:46 -- host/discovery.sh@75 -- # notify_id=0 00:21:44.440 07:27:46 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:44.440 07:27:46 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:44.440 07:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.440 07:27:46 -- common/autotest_common.sh@10 -- # set +x 00:21:44.440 07:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.440 07:27:46 -- host/discovery.sh@100 -- # sleep 1 00:21:44.698 [2024-11-04 07:27:46.512838] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:44.698 [2024-11-04 07:27:46.512869] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:44.698 [2024-11-04 07:27:46.512909] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:44.956 [2024-11-04 07:27:46.600015] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:44.956 [2024-11-04 07:27:46.655828] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:44.956 [2024-11-04 07:27:46.655854] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.214 07:27:47 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:45.214 07:27:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.215 07:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.215 07:27:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.215 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.215 07:27:47 -- host/discovery.sh@59 -- # sort 00:21:45.215 07:27:47 -- host/discovery.sh@59 -- # xargs 00:21:45.473 07:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@102 -- # get_bdev_list 00:21:45.473 07:27:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.473 07:27:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.473 07:27:47 -- host/discovery.sh@55 -- # xargs 00:21:45.473 07:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.473 07:27:47 -- host/discovery.sh@55 -- # sort 00:21:45.473 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.473 07:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:45.473 07:27:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.473 07:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.473 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.473 07:27:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.473 07:27:47 -- host/discovery.sh@63 -- # sort -n 00:21:45.473 07:27:47 -- host/discovery.sh@63 -- # xargs 00:21:45.473 07:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@104 -- # get_notification_count 00:21:45.473 07:27:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.473 07:27:47 -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.473 07:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.473 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.473 07:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@74 -- # notification_count=1 00:21:45.473 07:27:47 -- host/discovery.sh@75 -- # notify_id=1 00:21:45.473 07:27:47 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:45.473 07:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.473 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.473 07:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.473 07:27:47 -- host/discovery.sh@109 -- # sleep 1 00:21:46.849 07:27:48 -- host/discovery.sh@110 -- # get_bdev_list 00:21:46.849 07:27:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.849 07:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.849 07:27:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.849 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.849 07:27:48 -- host/discovery.sh@55 -- # sort 00:21:46.849 07:27:48 -- host/discovery.sh@55 -- # xargs 00:21:46.849 07:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.849 07:27:48 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:46.849 07:27:48 -- host/discovery.sh@111 -- # get_notification_count 00:21:46.849 07:27:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:46.849 07:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.849 07:27:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.849 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.849 07:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.849 07:27:48 -- host/discovery.sh@74 -- # notification_count=1 00:21:46.849 07:27:48 -- host/discovery.sh@75 -- # notify_id=2 00:21:46.849 07:27:48 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:46.849 07:27:48 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:46.850 07:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.850 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.850 [2024-11-04 07:27:48.393488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.850 [2024-11-04 07:27:48.394050] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:46.850 [2024-11-04 07:27:48.394090] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.850 07:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.850 07:27:48 -- host/discovery.sh@117 -- # sleep 1 00:21:46.850 [2024-11-04 07:27:48.480101] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:46.850 [2024-11-04 07:27:48.537364] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:46.850 [2024-11-04 07:27:48.537388] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:46.850 [2024-11-04 07:27:48.537395] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:47.785 07:27:49 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:47.785 07:27:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.785 07:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.785 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.785 07:27:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.785 07:27:49 -- host/discovery.sh@59 -- # sort 00:21:47.785 07:27:49 -- host/discovery.sh@59 -- # xargs 00:21:47.785 07:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@119 -- # get_bdev_list 00:21:47.785 07:27:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.785 07:27:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.785 07:27:49 -- host/discovery.sh@55 -- # sort 00:21:47.785 07:27:49 -- host/discovery.sh@55 -- # xargs 00:21:47.785 07:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.785 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.785 07:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:47.785 07:27:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.785 07:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.785 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.785 07:27:49 -- host/discovery.sh@63 -- # xargs 
00:21:47.785 07:27:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.785 07:27:49 -- host/discovery.sh@63 -- # sort -n 00:21:47.785 07:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@121 -- # get_notification_count 00:21:47.785 07:27:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:47.785 07:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.785 07:27:49 -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.785 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.785 07:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@74 -- # notification_count=0 00:21:47.785 07:27:49 -- host/discovery.sh@75 -- # notify_id=2 00:21:47.785 07:27:49 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:47.785 07:27:49 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:47.785 07:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.785 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:21:48.043 [2024-11-04 07:27:49.626453] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.043 [2024-11-04 07:27:49.626498] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.043 07:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.043 07:27:49 -- host/discovery.sh@127 -- # sleep 1 00:21:48.043 [2024-11-04 07:27:49.633536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.043 [2024-11-04 07:27:49.633586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.043 [2024-11-04 07:27:49.633614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.044 [2024-11-04 07:27:49.633623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.044 [2024-11-04 07:27:49.633631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.044 [2024-11-04 07:27:49.633639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.044 [2024-11-04 07:27:49.633648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.044 [2024-11-04 07:27:49.633656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.044 [2024-11-04 07:27:49.633665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.643492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.653510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 
07:27:49.653633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.653680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.653695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.653705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.653720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.653743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.653752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.653761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.653807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.044 [2024-11-04 07:27:49.663591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 07:27:49.663700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.663744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.663759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.663769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.663783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.663796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.663804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.663812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.663826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.044 [2024-11-04 07:27:49.673672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 07:27:49.673784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.673826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.673840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.673849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.673872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.673930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.673956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.673980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.673995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.044 [2024-11-04 07:27:49.683753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 07:27:49.683858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.683928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.683946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.683955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.683986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.683998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.684006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.684015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.684044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.044 [2024-11-04 07:27:49.693829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 07:27:49.693939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.693980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.693994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.694004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.694017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.694029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.694036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.694044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.694056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.044 [2024-11-04 07:27:49.703878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.044 [2024-11-04 07:27:49.703962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.704000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.044 [2024-11-04 07:27:49.704014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be570 with addr=10.0.0.2, port=4420 00:21:48.044 [2024-11-04 07:27:49.704022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be570 is same with the state(5) to be set 00:21:48.044 [2024-11-04 07:27:49.704035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be570 (9): Bad file descriptor 00:21:48.044 [2024-11-04 07:27:49.704047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.044 [2024-11-04 07:27:49.704054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.044 [2024-11-04 07:27:49.704061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.044 [2024-11-04 07:27:49.704073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
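The refusals keep repeating until the discovery poller reconciles its log page, which is what the next entries show: the 10.0.0.2:4420 path for nqn.2016-06.io.spdk:cnode0 is dropped ("not found") while the 10.0.0.2:4421 path is kept ("found again"). The shell trace that follows then re-verifies the surviving path through the host RPC socket; the helper it runs boils down to the pipeline below (commands and jq filter taken from the trace, with SPDK's scripts/rpc.py standing in for the harness's rpc_cmd wrapper):

    # list the service IDs of nvme0's active paths; after the failover this prints 4421
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs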
00:21:48.044 [2024-11-04 07:27:49.712527] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:48.044 [2024-11-04 07:27:49.712553] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.979 07:27:50 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:48.979 07:27:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.979 07:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.979 07:27:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.979 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:21:48.979 07:27:50 -- host/discovery.sh@59 -- # sort 00:21:48.979 07:27:50 -- host/discovery.sh@59 -- # xargs 00:21:48.979 07:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@129 -- # get_bdev_list 00:21:48.979 07:27:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.979 07:27:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.979 07:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.979 07:27:50 -- host/discovery.sh@55 -- # sort 00:21:48.979 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:21:48.979 07:27:50 -- host/discovery.sh@55 -- # xargs 00:21:48.979 07:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:48.979 07:27:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.979 07:27:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.979 07:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.979 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:21:48.979 07:27:50 -- host/discovery.sh@63 -- # sort -n 00:21:48.979 07:27:50 -- host/discovery.sh@63 -- # xargs 00:21:48.979 07:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:48.979 07:27:50 -- host/discovery.sh@131 -- # get_notification_count 00:21:48.979 07:27:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.979 07:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.979 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:21:48.979 07:27:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.979 07:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.238 07:27:50 -- host/discovery.sh@74 -- # notification_count=0 00:21:49.238 07:27:50 -- host/discovery.sh@75 -- # notify_id=2 00:21:49.238 07:27:50 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:49.238 07:27:50 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:49.238 07:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.238 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:21:49.238 07:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.238 07:27:50 -- host/discovery.sh@135 -- # sleep 1 00:21:50.173 07:27:51 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:50.173 07:27:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.173 07:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.173 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 07:27:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.173 07:27:51 -- host/discovery.sh@59 -- # sort 00:21:50.173 07:27:51 -- host/discovery.sh@59 -- # xargs 00:21:50.173 07:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.173 07:27:51 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:50.173 07:27:51 -- host/discovery.sh@137 -- # get_bdev_list 00:21:50.173 07:27:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.173 07:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.173 07:27:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.173 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 07:27:51 -- host/discovery.sh@55 -- # sort 00:21:50.173 07:27:51 -- host/discovery.sh@55 -- # xargs 00:21:50.173 07:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.173 07:27:51 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:50.173 07:27:51 -- host/discovery.sh@138 -- # get_notification_count 00:21:50.173 07:27:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.173 07:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.173 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 07:27:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.173 07:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.432 07:27:52 -- host/discovery.sh@74 -- # notification_count=2 00:21:50.432 07:27:52 -- host/discovery.sh@75 -- # notify_id=4 00:21:50.432 07:27:52 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:50.432 07:27:52 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.432 07:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.432 07:27:52 -- common/autotest_common.sh@10 -- # set +x 00:21:51.367 [2024-11-04 07:27:53.063597] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.367 [2024-11-04 07:27:53.063621] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.367 [2024-11-04 07:27:53.063637] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.367 [2024-11-04 07:27:53.149680] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:51.626 [2024-11-04 07:27:53.209071] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:51.626 [2024-11-04 07:27:53.209155] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:51.626 07:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.626 07:27:53 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:51.626 07:27:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.626 07:27:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.626 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 2024/11/04 07:27:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:51.626 request: 00:21:51.626 { 00:21:51.626 "method": "bdev_nvme_start_discovery", 00:21:51.626 "params": { 00:21:51.626 "name": "nvme", 00:21:51.626 "trtype": "tcp", 00:21:51.626 "traddr": "10.0.0.2", 00:21:51.626 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:51.626 "adrfam": "ipv4", 00:21:51.626 "trsvcid": "8009", 00:21:51.626 "wait_for_attach": true 00:21:51.626 } 00:21:51.626 } 00:21:51.626 Got JSON-RPC error response 00:21:51.626 GoRPCClient: error on JSON-RPC call 00:21:51.626 07:27:53 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:51.626 07:27:53 -- common/autotest_common.sh@643 -- # es=1 00:21:51.626 07:27:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:51.626 07:27:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:51.626 07:27:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:51.626 07:27:53 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:51.626 07:27:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.626 07:27:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:51.626 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.626 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 07:27:53 -- host/discovery.sh@67 -- # sort 00:21:51.626 07:27:53 -- host/discovery.sh@67 -- # xargs 00:21:51.626 07:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.626 07:27:53 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:51.626 07:27:53 -- host/discovery.sh@147 -- # get_bdev_list 00:21:51.626 07:27:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.626 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.626 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 07:27:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.626 07:27:53 -- host/discovery.sh@55 -- # sort 00:21:51.626 07:27:53 -- host/discovery.sh@55 -- # xargs 00:21:51.626 07:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.626 07:27:53 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.626 07:27:53 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:51.626 07:27:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:51.626 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.626 07:27:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.626 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.627 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.627 2024/11/04 07:27:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:51.627 request: 00:21:51.627 { 00:21:51.627 "method": "bdev_nvme_start_discovery", 00:21:51.627 "params": { 00:21:51.627 "name": "nvme_second", 00:21:51.627 "trtype": "tcp", 00:21:51.627 "traddr": "10.0.0.2", 00:21:51.627 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:51.627 "adrfam": "ipv4", 00:21:51.627 "trsvcid": "8009", 00:21:51.627 "wait_for_attach": true 00:21:51.627 } 00:21:51.627 } 00:21:51.627 Got JSON-RPC error response 00:21:51.627 
GoRPCClient: error on JSON-RPC call 00:21:51.627 07:27:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:51.627 07:27:53 -- common/autotest_common.sh@643 -- # es=1 00:21:51.627 07:27:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:51.627 07:27:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:51.627 07:27:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:51.627 07:27:53 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:51.627 07:27:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.627 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.627 07:27:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:51.627 07:27:53 -- host/discovery.sh@67 -- # sort 00:21:51.627 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.627 07:27:53 -- host/discovery.sh@67 -- # xargs 00:21:51.627 07:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.627 07:27:53 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:51.627 07:27:53 -- host/discovery.sh@153 -- # get_bdev_list 00:21:51.627 07:27:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.627 07:27:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.627 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.627 07:27:53 -- host/discovery.sh@55 -- # sort 00:21:51.627 07:27:53 -- host/discovery.sh@55 -- # xargs 00:21:51.627 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:51.627 07:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.885 07:27:53 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.885 07:27:53 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.885 07:27:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:51.885 07:27:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.885 07:27:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:51.885 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.885 07:27:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:51.885 07:27:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:51.885 07:27:53 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.885 07:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.885 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.821 [2024-11-04 07:27:54.478987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.821 [2024-11-04 07:27:54.479073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.821 [2024-11-04 07:27:54.479091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1659f80 with addr=10.0.0.2, port=8010 00:21:52.821 [2024-11-04 07:27:54.479106] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:52.821 [2024-11-04 07:27:54.479115] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:52.821 [2024-11-04 07:27:54.479122] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:53.756 [2024-11-04 07:27:55.478973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.756 [2024-11-04 07:27:55.479059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.756 [2024-11-04 07:27:55.479077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1632ca0 with addr=10.0.0.2, port=8010 00:21:53.756 [2024-11-04 07:27:55.479090] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.756 [2024-11-04 07:27:55.479099] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.756 [2024-11-04 07:27:55.479107] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.692 [2024-11-04 07:27:56.478896] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:54.692 2024/11/04 07:27:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:54.692 request: 00:21:54.692 { 00:21:54.692 "method": "bdev_nvme_start_discovery", 00:21:54.692 "params": { 00:21:54.692 "name": "nvme_second", 00:21:54.692 "trtype": "tcp", 00:21:54.692 "traddr": "10.0.0.2", 00:21:54.692 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:54.692 "adrfam": "ipv4", 00:21:54.692 "trsvcid": "8010", 00:21:54.692 "attach_timeout_ms": 3000 00:21:54.692 } 00:21:54.692 } 00:21:54.692 Got JSON-RPC error response 00:21:54.692 GoRPCClient: error on JSON-RPC call 00:21:54.692 07:27:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:54.692 07:27:56 -- common/autotest_common.sh@643 -- # es=1 00:21:54.692 07:27:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:54.692 07:27:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:54.692 07:27:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:54.692 07:27:56 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:54.692 07:27:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:54.692 07:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.692 07:27:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:54.692 07:27:56 -- common/autotest_common.sh@10 -- # set +x 00:21:54.692 07:27:56 -- host/discovery.sh@67 -- # sort 00:21:54.692 07:27:56 -- host/discovery.sh@67 -- # xargs 00:21:54.692 07:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.951 07:27:56 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:54.951 07:27:56 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:54.951 07:27:56 -- host/discovery.sh@162 -- # kill 95947 00:21:54.951 07:27:56 -- host/discovery.sh@163 -- # nvmftestfini 00:21:54.951 07:27:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.951 07:27:56 -- nvmf/common.sh@116 -- # sync 00:21:54.951 07:27:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.951 07:27:56 -- nvmf/common.sh@119 -- # set +e 00:21:54.951 07:27:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.951 07:27:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.951 rmmod nvme_tcp 00:21:54.951 rmmod nvme_fabrics 00:21:54.951 rmmod nvme_keyring 00:21:54.951 07:27:56 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:21:54.951 07:27:56 -- nvmf/common.sh@123 -- # set -e 00:21:54.951 07:27:56 -- nvmf/common.sh@124 -- # return 0 00:21:54.951 07:27:56 -- nvmf/common.sh@477 -- # '[' -n 95897 ']' 00:21:54.951 07:27:56 -- nvmf/common.sh@478 -- # killprocess 95897 00:21:54.951 07:27:56 -- common/autotest_common.sh@926 -- # '[' -z 95897 ']' 00:21:54.951 07:27:56 -- common/autotest_common.sh@930 -- # kill -0 95897 00:21:54.951 07:27:56 -- common/autotest_common.sh@931 -- # uname 00:21:54.951 07:27:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.951 07:27:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95897 00:21:54.951 07:27:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:54.951 killing process with pid 95897 00:21:54.951 07:27:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:54.951 07:27:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95897' 00:21:54.951 07:27:56 -- common/autotest_common.sh@945 -- # kill 95897 00:21:54.951 07:27:56 -- common/autotest_common.sh@950 -- # wait 95897 00:21:55.210 07:27:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.210 07:27:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.210 07:27:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.210 07:27:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.210 07:27:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.210 07:27:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.210 07:27:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.210 07:27:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.210 07:27:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:55.210 00:21:55.210 real 0m14.101s 00:21:55.210 user 0m27.555s 00:21:55.210 sys 0m1.745s 00:21:55.210 07:27:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.210 07:27:56 -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 ************************************ 00:21:55.210 END TEST nvmf_discovery 00:21:55.210 ************************************ 00:21:55.210 07:27:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.210 07:27:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:55.210 07:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.210 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 ************************************ 00:21:55.210 START TEST nvmf_discovery_remove_ifc 00:21:55.210 ************************************ 00:21:55.210 07:27:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.469 * Looking for test storage... 
00:21:55.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.469 07:27:57 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.469 07:27:57 -- nvmf/common.sh@7 -- # uname -s 00:21:55.469 07:27:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.469 07:27:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.469 07:27:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.469 07:27:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.469 07:27:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.469 07:27:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.469 07:27:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.469 07:27:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.469 07:27:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.469 07:27:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.469 07:27:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:55.469 07:27:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:21:55.469 07:27:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.469 07:27:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.469 07:27:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.469 07:27:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.469 07:27:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.469 07:27:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.469 07:27:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.469 07:27:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.470 07:27:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.470 07:27:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.470 07:27:57 -- 
paths/export.sh@5 -- # export PATH 00:21:55.470 07:27:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.470 07:27:57 -- nvmf/common.sh@46 -- # : 0 00:21:55.470 07:27:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:55.470 07:27:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:55.470 07:27:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:55.470 07:27:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.470 07:27:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.470 07:27:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:55.470 07:27:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:55.470 07:27:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:55.470 07:27:57 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:55.470 07:27:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:55.470 07:27:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.470 07:27:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:55.470 07:27:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:55.470 07:27:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:55.470 07:27:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.470 07:27:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.470 07:27:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.470 07:27:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:55.470 07:27:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:55.470 07:27:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:55.470 07:27:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:55.470 07:27:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:55.470 07:27:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:55.470 07:27:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.470 07:27:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.470 07:27:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:55.470 07:27:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:55.470 07:27:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.470 07:27:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.470 07:27:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.470 07:27:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
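The variables exported here describe the virtual topology that nvmf_veth_init builds next: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target's 10.0.0.2 and 10.0.0.3 interfaces live inside the nvmf_tgt_ns_spdk namespace, and NVMF_TARGET_NS_CMD is simply the prefix used for every target-side command. Once the namespace has been created (a few lines further down), target-side state can be inspected with commands of this form (illustrative):

    ip netns exec nvmf_tgt_ns_spdk ip -4 addr show dev nvmf_tgt_if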
00:21:55.470 07:27:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.470 07:27:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.470 07:27:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.470 07:27:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.470 07:27:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:55.470 07:27:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:55.470 Cannot find device "nvmf_tgt_br" 00:21:55.470 07:27:57 -- nvmf/common.sh@154 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.470 Cannot find device "nvmf_tgt_br2" 00:21:55.470 07:27:57 -- nvmf/common.sh@155 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:55.470 07:27:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:55.470 Cannot find device "nvmf_tgt_br" 00:21:55.470 07:27:57 -- nvmf/common.sh@157 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:55.470 Cannot find device "nvmf_tgt_br2" 00:21:55.470 07:27:57 -- nvmf/common.sh@158 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:55.470 07:27:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:55.470 07:27:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:55.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.470 07:27:57 -- nvmf/common.sh@161 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:55.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.470 07:27:57 -- nvmf/common.sh@162 -- # true 00:21:55.470 07:27:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:55.470 07:27:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:55.470 07:27:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:55.470 07:27:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:55.729 07:27:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:55.729 07:27:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:55.729 07:27:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:55.729 07:27:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:55.729 07:27:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:55.729 07:27:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:55.729 07:27:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:55.729 07:27:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:55.729 07:27:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:55.729 07:27:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:55.729 07:27:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:55.729 07:27:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:55.729 07:27:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:55.729 07:27:57 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:55.729 07:27:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:55.729 07:27:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:55.729 07:27:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:55.729 07:27:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:55.729 07:27:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:55.729 07:27:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:55.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:21:55.729 00:21:55.729 --- 10.0.0.2 ping statistics --- 00:21:55.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.729 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:55.729 07:27:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:55.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:55.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:21:55.729 00:21:55.729 --- 10.0.0.3 ping statistics --- 00:21:55.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.729 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:55.729 07:27:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:55.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:55.729 00:21:55.729 --- 10.0.0.1 ping statistics --- 00:21:55.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.729 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:55.729 07:27:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.729 07:27:57 -- nvmf/common.sh@421 -- # return 0 00:21:55.729 07:27:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:55.729 07:27:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.729 07:27:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:55.729 07:27:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:55.729 07:27:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.729 07:27:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:55.729 07:27:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:55.729 07:27:57 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:55.729 07:27:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:55.729 07:27:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:55.729 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:21:55.729 07:27:57 -- nvmf/common.sh@469 -- # nvmfpid=96453 00:21:55.729 07:27:57 -- nvmf/common.sh@470 -- # waitforlisten 96453 00:21:55.729 07:27:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:55.729 07:27:57 -- common/autotest_common.sh@819 -- # '[' -z 96453 ']' 00:21:55.729 07:27:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.729 07:27:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:55.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.729 07:27:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
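The block above is the veth/bridge plumbing plus the three pings that prove it works. Condensed to the essential commands (taken from the trace, reduced to the first target interface, run as root), the topology is:

    ip netns add nvmf_tgt_ns_spdk                                  # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the root-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check

With this in place, the SPDK target started below can listen on 10.0.0.2 inside the namespace while the host-side app connects to it from the root namespace.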
00:21:55.729 07:27:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:55.729 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:21:55.988 [2024-11-04 07:27:57.573864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:55.988 [2024-11-04 07:27:57.573959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.988 [2024-11-04 07:27:57.711721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.988 [2024-11-04 07:27:57.798770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:55.988 [2024-11-04 07:27:57.798952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.988 [2024-11-04 07:27:57.798967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.988 [2024-11-04 07:27:57.798975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.988 [2024-11-04 07:27:57.799001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.924 07:27:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:56.924 07:27:58 -- common/autotest_common.sh@852 -- # return 0 00:21:56.924 07:27:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:56.924 07:27:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:56.924 07:27:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.924 07:27:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.924 07:27:58 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:56.924 07:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.924 07:27:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.924 [2024-11-04 07:27:58.641279] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.924 [2024-11-04 07:27:58.649434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:56.924 null0 00:21:56.924 [2024-11-04 07:27:58.681375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.924 07:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.924 07:27:58 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96503 00:21:56.924 07:27:58 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:56.924 07:27:58 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96503 /tmp/host.sock 00:21:56.924 07:27:58 -- common/autotest_common.sh@819 -- # '[' -z 96503 ']' 00:21:56.924 07:27:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:56.924 07:27:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.924 07:27:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:56.924 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:56.924 07:27:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.924 07:27:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.924 [2024-11-04 07:27:58.757841] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
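At this point the trace shows the target app running inside the namespace and the rpc_cmd block configuring it: the TCP transport is initialized, a discovery listener comes up on 10.0.0.2:8009, a bdev named null0 is created (the lone "null0" line is the RPC's reply), a subsystem listener comes up on 10.0.0.2:4420, and a second nvmf_tgt is launched as the host with its own RPC socket at /tmp/host.sock. The exact RPC calls are not reproduced in this excerpt; the sketch below is what such a configuration typically amounts to, using SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock socket (method names are standard rpc.py methods, but the bdev size and extra options are assumptions made for illustration):

    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py bdev_null_create null0 1000 512        # size/block size chosen for illustration only
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 -f ipv4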
00:21:56.924 [2024-11-04 07:27:58.757953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96503 ] 00:21:57.183 [2024-11-04 07:27:58.901422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.183 [2024-11-04 07:27:58.970622] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.183 [2024-11-04 07:27:58.970810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.119 07:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.119 07:27:59 -- common/autotest_common.sh@852 -- # return 0 00:21:58.119 07:27:59 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.119 07:27:59 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:58.119 07:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.119 07:27:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.119 07:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.119 07:27:59 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:58.119 07:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.119 07:27:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.119 07:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.119 07:27:59 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:58.119 07:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.119 07:27:59 -- common/autotest_common.sh@10 -- # set +x 00:21:59.054 [2024-11-04 07:28:00.734251] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.054 [2024-11-04 07:28:00.734281] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.055 [2024-11-04 07:28:00.734298] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.055 [2024-11-04 07:28:00.820343] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:59.055 [2024-11-04 07:28:00.875743] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:59.055 [2024-11-04 07:28:00.875791] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:59.055 [2024-11-04 07:28:00.875816] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:59.055 [2024-11-04 07:28:00.875830] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:59.055 [2024-11-04 07:28:00.875848] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.055 07:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.055 07:28:00 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:59.055 07:28:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.055 07:28:00 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.055 [2024-11-04 07:28:00.882837] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10e4da0 was disconnected and freed. delete nvme_qpair. 00:21:59.055 07:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.055 07:28:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.055 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.055 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.055 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.313 07:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.313 07:28:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.313 07:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.313 07:28:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.313 07:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.313 07:28:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:59.313 07:28:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:00.248 07:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.248 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:00.248 07:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:00.248 07:28:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:01.625 07:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.625 07:28:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.625 07:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:01.625 07:28:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:02.612 07:28:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:02.612 07:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:02.612 07:28:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:02.612 07:28:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.564 07:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.564 07:28:05 -- common/autotest_common.sh@10 -- # set +x 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.564 07:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.564 07:28:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.505 07:28:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.505 07:28:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.506 07:28:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.506 07:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.506 07:28:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.506 07:28:06 -- common/autotest_common.sh@10 -- # set +x 00:22:04.506 07:28:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.506 07:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.506 [2024-11-04 07:28:06.304124] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:04.506 [2024-11-04 07:28:06.304175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.506 [2024-11-04 07:28:06.304198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.506 [2024-11-04 07:28:06.304210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.506 [2024-11-04 07:28:06.304234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.506 [2024-11-04 07:28:06.304242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.506 [2024-11-04 07:28:06.304256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.506 [2024-11-04 07:28:06.304265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.506 [2024-11-04 07:28:06.304288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.506 [2024-11-04 
07:28:06.304296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.506 [2024-11-04 07:28:06.304303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.506 [2024-11-04 07:28:06.304311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e690 is same with the state(5) to be set 00:22:04.506 07:28:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.506 07:28:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.506 [2024-11-04 07:28:06.314120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104e690 (9): Bad file descriptor 00:22:04.506 [2024-11-04 07:28:06.324140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.881 07:28:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.881 07:28:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.881 07:28:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.881 07:28:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.881 07:28:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.881 07:28:07 -- common/autotest_common.sh@10 -- # set +x 00:22:05.881 07:28:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.881 [2024-11-04 07:28:07.377978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:06.818 [2024-11-04 07:28:08.401995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:06.818 [2024-11-04 07:28:08.402095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x104e690 with addr=10.0.0.2, port=4420 00:22:06.818 [2024-11-04 07:28:08.402127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e690 is same with the state(5) to be set 00:22:06.818 [2024-11-04 07:28:08.402176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.818 [2024-11-04 07:28:08.402198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.818 [2024-11-04 07:28:08.402223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.818 [2024-11-04 07:28:08.402243] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:06.818 [2024-11-04 07:28:08.403040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104e690 (9): Bad file descriptor 00:22:06.818 [2024-11-04 07:28:08.403119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
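By this point the script has already pulled the target path out from under the host (the earlier ip addr del 10.0.0.2/24 and ip link set nvmf_tgt_if down calls), so the failures change character: reads on the existing connection and new connect() attempts now return errno 110 (ETIMEDOUT, "Connection timed out") rather than 111, and the outstanding admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) are completed as ABORTED - SQ DELETION. Because discovery was started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, the controller is given up after roughly two seconds of failed reconnects and its namespace bdev disappears, which is exactly what the wait_for_bdev '' / get_bdev_list / sleep 1 loop in the trace is polling for. A minimal sketch of that polling pattern (same RPC and jq filter as the trace, with scripts/rpc.py standing in for rpc_cmd):

    # wait until the host app no longer reports any bdevs (i.e. nvme0n1 is gone)
    until [[ -z "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do
        sleep 1
    done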
00:22:06.818 [2024-11-04 07:28:08.403175] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:06.818 [2024-11-04 07:28:08.403244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.818 [2024-11-04 07:28:08.403275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.818 [2024-11-04 07:28:08.403300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.818 [2024-11-04 07:28:08.403320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.818 [2024-11-04 07:28:08.403341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.818 [2024-11-04 07:28:08.403361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.818 [2024-11-04 07:28:08.403382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.818 [2024-11-04 07:28:08.403401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.818 [2024-11-04 07:28:08.403423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.818 [2024-11-04 07:28:08.403442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.818 [2024-11-04 07:28:08.403461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:06.818 [2024-11-04 07:28:08.403523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ac410 (9): Bad file descriptor 00:22:06.818 [2024-11-04 07:28:08.404523] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:06.818 [2024-11-04 07:28:08.404582] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:06.818 07:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.818 07:28:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.818 07:28:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.754 07:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.754 07:28:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.754 07:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.754 07:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.754 07:28:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.754 07:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:07.754 07:28:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.693 [2024-11-04 07:28:10.415566] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:08.693 [2024-11-04 07:28:10.415588] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:08.693 [2024-11-04 07:28:10.415604] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.693 [2024-11-04 07:28:10.501657] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:08.952 [2024-11-04 07:28:10.556460] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:08.952 [2024-11-04 07:28:10.556501] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:08.952 [2024-11-04 07:28:10.556521] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:08.952 [2024-11-04 07:28:10.556535] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
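The loop above is the wait_for_bdev step of discovery_remove_ifc: once 10.0.0.2/24 is re-added to nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace, the test polls the bdev list over /tmp/host.sock once per second until discovery re-attaches the subsystem as nvme1n1. A condensed sketch of that pattern, built only from commands visible in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; the helper body below is an approximation, not the script's exact source):

    # approximate reconstruction of the get_bdev_list / wait_for_bdev polling seen above
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # restore the target address
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do
        sleep 1                                                              # discovery needs a moment to reconnect
    done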
attach nvme1 done 00:22:08.952 [2024-11-04 07:28:10.556543] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.952 07:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.952 07:28:10 -- common/autotest_common.sh@10 -- # set +x 00:22:08.952 [2024-11-04 07:28:10.564222] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10b20c0 was disconnected and freed. delete nvme_qpair. 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.952 07:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:08.952 07:28:10 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96503 00:22:08.952 07:28:10 -- common/autotest_common.sh@926 -- # '[' -z 96503 ']' 00:22:08.952 07:28:10 -- common/autotest_common.sh@930 -- # kill -0 96503 00:22:08.952 07:28:10 -- common/autotest_common.sh@931 -- # uname 00:22:08.952 07:28:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.952 07:28:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96503 00:22:08.952 killing process with pid 96503 00:22:08.952 07:28:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:08.952 07:28:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:08.952 07:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96503' 00:22:08.952 07:28:10 -- common/autotest_common.sh@945 -- # kill 96503 00:22:08.952 07:28:10 -- common/autotest_common.sh@950 -- # wait 96503 00:22:09.211 07:28:10 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:09.211 07:28:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:09.211 07:28:10 -- nvmf/common.sh@116 -- # sync 00:22:09.211 07:28:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:09.211 07:28:10 -- nvmf/common.sh@119 -- # set +e 00:22:09.211 07:28:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:09.211 07:28:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:09.211 rmmod nvme_tcp 00:22:09.211 rmmod nvme_fabrics 00:22:09.211 rmmod nvme_keyring 00:22:09.211 07:28:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:09.211 07:28:10 -- nvmf/common.sh@123 -- # set -e 00:22:09.211 07:28:10 -- nvmf/common.sh@124 -- # return 0 00:22:09.211 07:28:10 -- nvmf/common.sh@477 -- # '[' -n 96453 ']' 00:22:09.211 07:28:10 -- nvmf/common.sh@478 -- # killprocess 96453 00:22:09.211 07:28:10 -- common/autotest_common.sh@926 -- # '[' -z 96453 ']' 00:22:09.211 07:28:10 -- common/autotest_common.sh@930 -- # kill -0 96453 00:22:09.211 07:28:10 -- common/autotest_common.sh@931 -- # uname 00:22:09.211 07:28:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.211 07:28:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96453 00:22:09.211 killing process with pid 96453 00:22:09.211 07:28:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.211 07:28:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:22:09.211 07:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96453' 00:22:09.211 07:28:10 -- common/autotest_common.sh@945 -- # kill 96453 00:22:09.211 07:28:10 -- common/autotest_common.sh@950 -- # wait 96453 00:22:09.470 07:28:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:09.470 07:28:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:09.470 07:28:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:09.470 07:28:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.470 07:28:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:09.470 07:28:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.470 07:28:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.470 07:28:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.470 07:28:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:09.470 00:22:09.470 real 0m14.227s 00:22:09.470 user 0m24.266s 00:22:09.470 sys 0m1.575s 00:22:09.470 07:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.470 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:09.470 ************************************ 00:22:09.470 END TEST nvmf_discovery_remove_ifc 00:22:09.470 ************************************ 00:22:09.470 07:28:11 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:09.470 07:28:11 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:09.470 07:28:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:09.470 07:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:09.470 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:09.729 ************************************ 00:22:09.729 START TEST nvmf_digest 00:22:09.729 ************************************ 00:22:09.729 07:28:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:09.729 * Looking for test storage... 
00:22:09.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:09.729 07:28:11 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:09.729 07:28:11 -- nvmf/common.sh@7 -- # uname -s 00:22:09.729 07:28:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.729 07:28:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.729 07:28:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.729 07:28:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.729 07:28:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.729 07:28:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.729 07:28:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.729 07:28:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.729 07:28:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.729 07:28:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:22:09.729 07:28:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:22:09.729 07:28:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.729 07:28:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.729 07:28:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:09.729 07:28:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.729 07:28:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.729 07:28:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.729 07:28:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.729 07:28:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.729 07:28:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.729 07:28:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.729 07:28:11 -- paths/export.sh@5 
-- # export PATH 00:22:09.729 07:28:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.729 07:28:11 -- nvmf/common.sh@46 -- # : 0 00:22:09.729 07:28:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:09.729 07:28:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:09.729 07:28:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:09.729 07:28:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.729 07:28:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.729 07:28:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:09.729 07:28:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:09.729 07:28:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:09.729 07:28:11 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:09.729 07:28:11 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:09.729 07:28:11 -- host/digest.sh@16 -- # runtime=2 00:22:09.729 07:28:11 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:09.729 07:28:11 -- host/digest.sh@132 -- # nvmftestinit 00:22:09.729 07:28:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:09.729 07:28:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.729 07:28:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:09.729 07:28:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:09.729 07:28:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:09.729 07:28:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.729 07:28:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.729 07:28:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.729 07:28:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:09.729 07:28:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:09.729 07:28:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.729 07:28:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.729 07:28:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:09.729 07:28:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:09.729 07:28:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:09.729 07:28:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:09.729 07:28:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:09.729 07:28:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.729 07:28:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:09.729 07:28:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:09.729 07:28:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:09.729 07:28:11 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:09.729 07:28:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:09.729 07:28:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:09.729 Cannot find device "nvmf_tgt_br" 00:22:09.729 07:28:11 -- nvmf/common.sh@154 -- # true 00:22:09.729 07:28:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:09.729 Cannot find device "nvmf_tgt_br2" 00:22:09.729 07:28:11 -- nvmf/common.sh@155 -- # true 00:22:09.729 07:28:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:09.729 07:28:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:09.729 Cannot find device "nvmf_tgt_br" 00:22:09.729 07:28:11 -- nvmf/common.sh@157 -- # true 00:22:09.729 07:28:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:09.729 Cannot find device "nvmf_tgt_br2" 00:22:09.729 07:28:11 -- nvmf/common.sh@158 -- # true 00:22:09.729 07:28:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:09.729 07:28:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:09.988 07:28:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:09.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.988 07:28:11 -- nvmf/common.sh@161 -- # true 00:22:09.988 07:28:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:09.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.988 07:28:11 -- nvmf/common.sh@162 -- # true 00:22:09.988 07:28:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:09.988 07:28:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:09.988 07:28:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:09.988 07:28:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:09.988 07:28:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:09.988 07:28:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:09.988 07:28:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:09.988 07:28:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:09.988 07:28:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:09.988 07:28:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:09.988 07:28:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:09.988 07:28:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:09.988 07:28:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:09.988 07:28:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:09.988 07:28:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:09.988 07:28:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:09.988 07:28:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:09.988 07:28:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:09.988 07:28:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:09.988 07:28:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:09.988 07:28:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:09.988 
07:28:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:09.988 07:28:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:09.988 07:28:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:09.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:22:09.988 00:22:09.988 --- 10.0.0.2 ping statistics --- 00:22:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.989 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:09.989 07:28:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:09.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:09.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:09.989 00:22:09.989 --- 10.0.0.3 ping statistics --- 00:22:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.989 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:09.989 07:28:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:09.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:22:09.989 00:22:09.989 --- 10.0.0.1 ping statistics --- 00:22:09.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.989 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:09.989 07:28:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.989 07:28:11 -- nvmf/common.sh@421 -- # return 0 00:22:09.989 07:28:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:09.989 07:28:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.989 07:28:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:09.989 07:28:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:09.989 07:28:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.989 07:28:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:09.989 07:28:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:09.989 07:28:11 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:09.989 07:28:11 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:09.989 07:28:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:09.989 07:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:09.989 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:09.989 ************************************ 00:22:09.989 START TEST nvmf_digest_clean 00:22:09.989 ************************************ 00:22:09.989 07:28:11 -- common/autotest_common.sh@1104 -- # run_digest 00:22:09.989 07:28:11 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:09.989 07:28:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:09.989 07:28:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:09.989 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:09.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
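The nvmf_veth_init block above is what gives the digest tests their network: a veth pair whose target end (nvmf_tgt_if, 10.0.0.2) lives in the nvmf_tgt_ns_spdk namespace, a second pair for 10.0.0.3, a bridge joining the host-side peers, an iptables rule admitting TCP port 4420, and ping checks in both directions. A condensed sketch of that topology, using the names and addresses from the log (the second veth pair is elided for brevity):

    # initiator: 10.0.0.1 on nvmf_init_if; target: 10.0.0.2 on nvmf_tgt_if inside the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # host-side peer of the initiator veth
    ip link set nvmf_tgt_br master nvmf_br                       # host-side peer of the target veth
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check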
00:22:09.989 07:28:11 -- nvmf/common.sh@469 -- # nvmfpid=96915 00:22:09.989 07:28:11 -- nvmf/common.sh@470 -- # waitforlisten 96915 00:22:09.989 07:28:11 -- common/autotest_common.sh@819 -- # '[' -z 96915 ']' 00:22:09.989 07:28:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:09.989 07:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.989 07:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.989 07:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.989 07:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.989 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:10.248 [2024-11-04 07:28:11.853066] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:10.248 [2024-11-04 07:28:11.853147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.248 [2024-11-04 07:28:11.997226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.248 [2024-11-04 07:28:12.070209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.248 [2024-11-04 07:28:12.070375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.248 [2024-11-04 07:28:12.070392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.248 [2024-11-04 07:28:12.070403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.248 [2024-11-04 07:28:12.070439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.183 07:28:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.184 07:28:12 -- common/autotest_common.sh@852 -- # return 0 00:22:11.184 07:28:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:11.184 07:28:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:11.184 07:28:12 -- common/autotest_common.sh@10 -- # set +x 00:22:11.184 07:28:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.184 07:28:12 -- host/digest.sh@120 -- # common_target_config 00:22:11.184 07:28:12 -- host/digest.sh@43 -- # rpc_cmd 00:22:11.184 07:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.184 07:28:12 -- common/autotest_common.sh@10 -- # set +x 00:22:11.184 null0 00:22:11.184 [2024-11-04 07:28:12.978093] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.184 [2024-11-04 07:28:13.002191] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.184 07:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.184 07:28:13 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:11.184 07:28:13 -- host/digest.sh@77 -- # local rw bs qd 00:22:11.184 07:28:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:11.184 07:28:13 -- host/digest.sh@80 -- # rw=randread 00:22:11.184 07:28:13 -- host/digest.sh@80 -- # bs=4096 00:22:11.184 07:28:13 -- host/digest.sh@80 -- # qd=128 00:22:11.184 07:28:13 -- host/digest.sh@82 -- # bperfpid=96965 00:22:11.184 07:28:13 -- host/digest.sh@83 -- # waitforlisten 96965 /var/tmp/bperf.sock 00:22:11.184 07:28:13 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:11.184 07:28:13 -- common/autotest_common.sh@819 -- # '[' -z 96965 ']' 00:22:11.184 07:28:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:11.184 07:28:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.184 07:28:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:11.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:11.184 07:28:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.184 07:28:13 -- common/autotest_common.sh@10 -- # set +x 00:22:11.442 [2024-11-04 07:28:13.062711] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
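The null0 bdev and the "Listening on 10.0.0.2 port 4420" notice just above are the visible side of common_target_config, which drives the freshly started nvmf_tgt over its default /var/tmp/spdk.sock. The trace only records the resulting notices, not the individual RPCs, so the following is a hypothetical reconstruction using standard rpc.py verbs and the values that do appear in the log (nqn.2016-06.io.spdk:cnode1, serial SPDKISFASTANDAWESOME, NVMF_TRANSPORT_OPTS='-t tcp -o'); the null bdev size and block size are assumptions:

    # hypothetical sketch -- the script issues these through rpc_cmd and only the notices are logged
    rpc.py nvmf_create_transport -t tcp -o                       # "*** TCP Transport Init ***"
    rpc.py bdev_null_create null0 100 4096                       # size and block size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen notice above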
00:22:11.442 [2024-11-04 07:28:13.063013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96965 ] 00:22:11.442 [2024-11-04 07:28:13.204644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.701 [2024-11-04 07:28:13.291732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.268 07:28:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.268 07:28:14 -- common/autotest_common.sh@852 -- # return 0 00:22:12.268 07:28:14 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:12.268 07:28:14 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:12.268 07:28:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:12.835 07:28:14 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.836 07:28:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:13.094 nvme0n1 00:22:13.094 07:28:14 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:13.094 07:28:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:13.094 Running I/O for 2 seconds... 00:22:14.997 00:22:14.997 Latency(us) 00:22:14.997 [2024-11-04T07:28:16.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.997 [2024-11-04T07:28:16.838Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:14.997 nvme0n1 : 2.00 23885.63 93.30 0.00 0.00 5354.44 2323.55 17396.83 00:22:14.997 [2024-11-04T07:28:16.838Z] =================================================================================================================== 00:22:14.997 [2024-11-04T07:28:16.838Z] Total : 23885.63 93.30 0.00 0.00 5354.44 2323.55 17396.83 00:22:14.997 0 00:22:15.346 07:28:16 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:15.346 07:28:16 -- host/digest.sh@92 -- # get_accel_stats 00:22:15.346 07:28:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:15.346 07:28:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:15.346 | select(.opcode=="crc32c") 00:22:15.346 | "\(.module_name) \(.executed)"' 00:22:15.346 07:28:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:15.346 07:28:17 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:15.346 07:28:17 -- host/digest.sh@93 -- # exp_module=software 00:22:15.346 07:28:17 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:15.346 07:28:17 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:15.346 07:28:17 -- host/digest.sh@97 -- # killprocess 96965 00:22:15.346 07:28:17 -- common/autotest_common.sh@926 -- # '[' -z 96965 ']' 00:22:15.346 07:28:17 -- common/autotest_common.sh@930 -- # kill -0 96965 00:22:15.346 07:28:17 -- common/autotest_common.sh@931 -- # uname 00:22:15.346 07:28:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:15.346 07:28:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96965 00:22:15.346 07:28:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:15.346 killing 
process with pid 96965 00:22:15.346 07:28:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:15.346 07:28:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96965' 00:22:15.346 Received shutdown signal, test time was about 2.000000 seconds 00:22:15.346 00:22:15.346 Latency(us) 00:22:15.346 [2024-11-04T07:28:17.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.346 [2024-11-04T07:28:17.187Z] =================================================================================================================== 00:22:15.346 [2024-11-04T07:28:17.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.346 07:28:17 -- common/autotest_common.sh@945 -- # kill 96965 00:22:15.346 07:28:17 -- common/autotest_common.sh@950 -- # wait 96965 00:22:15.605 07:28:17 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:15.605 07:28:17 -- host/digest.sh@77 -- # local rw bs qd 00:22:15.605 07:28:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:15.605 07:28:17 -- host/digest.sh@80 -- # rw=randread 00:22:15.605 07:28:17 -- host/digest.sh@80 -- # bs=131072 00:22:15.605 07:28:17 -- host/digest.sh@80 -- # qd=16 00:22:15.605 07:28:17 -- host/digest.sh@82 -- # bperfpid=97061 00:22:15.605 07:28:17 -- host/digest.sh@83 -- # waitforlisten 97061 /var/tmp/bperf.sock 00:22:15.605 07:28:17 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:15.605 07:28:17 -- common/autotest_common.sh@819 -- # '[' -z 97061 ']' 00:22:15.605 07:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:15.605 07:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:15.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:15.605 07:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:15.605 07:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.605 07:28:17 -- common/autotest_common.sh@10 -- # set +x 00:22:15.605 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:15.605 Zero copy mechanism will not be used. 00:22:15.605 [2024-11-04 07:28:17.442340] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
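Each run_bperf iteration in nvmf_digest_clean, including the 131072/16 randread run starting here, repeats the shape of the run that just finished: bring up bdevperf (started with --wait-for-rpc), attach the subsystem with data digests enabled, drive I/O for two seconds, then read the accel statistics and check that crc32c was actually executed and that the executing module matches the expected one (software in this configuration). A sketch of that sequence using the socket and verbs visible in the xtrace (rpc.py and bdevperf.py stand in for the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths):

    # per-iteration sequence, condensed from the trace
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests             # the 2-second timed run
    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # the run passes when the reported module is "software" and the executed count is > 0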
00:22:15.605 [2024-11-04 07:28:17.442464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97061 ] 00:22:15.864 [2024-11-04 07:28:17.581015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.864 [2024-11-04 07:28:17.644734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.800 07:28:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.800 07:28:18 -- common/autotest_common.sh@852 -- # return 0 00:22:16.800 07:28:18 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:16.800 07:28:18 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:16.800 07:28:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:17.059 07:28:18 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.059 07:28:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.317 nvme0n1 00:22:17.317 07:28:18 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:17.317 07:28:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:17.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:17.317 Zero copy mechanism will not be used. 00:22:17.317 Running I/O for 2 seconds... 00:22:19.849 00:22:19.849 Latency(us) 00:22:19.849 [2024-11-04T07:28:21.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.849 [2024-11-04T07:28:21.690Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:19.849 nvme0n1 : 2.00 9096.27 1137.03 0.00 0.00 1756.33 767.07 9889.98 00:22:19.849 [2024-11-04T07:28:21.690Z] =================================================================================================================== 00:22:19.849 [2024-11-04T07:28:21.690Z] Total : 9096.27 1137.03 0.00 0.00 1756.33 767.07 9889.98 00:22:19.849 0 00:22:19.849 07:28:21 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:19.849 07:28:21 -- host/digest.sh@92 -- # get_accel_stats 00:22:19.849 07:28:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:19.849 07:28:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:19.849 | select(.opcode=="crc32c") 00:22:19.849 | "\(.module_name) \(.executed)"' 00:22:19.849 07:28:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:19.849 07:28:21 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:19.849 07:28:21 -- host/digest.sh@93 -- # exp_module=software 00:22:19.849 07:28:21 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:19.849 07:28:21 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:19.849 07:28:21 -- host/digest.sh@97 -- # killprocess 97061 00:22:19.849 07:28:21 -- common/autotest_common.sh@926 -- # '[' -z 97061 ']' 00:22:19.849 07:28:21 -- common/autotest_common.sh@930 -- # kill -0 97061 00:22:19.849 07:28:21 -- common/autotest_common.sh@931 -- # uname 00:22:19.849 07:28:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.849 07:28:21 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 97061 00:22:19.849 07:28:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:19.849 07:28:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:19.849 killing process with pid 97061 00:22:19.849 07:28:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97061' 00:22:19.849 Received shutdown signal, test time was about 2.000000 seconds 00:22:19.849 00:22:19.849 Latency(us) 00:22:19.849 [2024-11-04T07:28:21.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.849 [2024-11-04T07:28:21.690Z] =================================================================================================================== 00:22:19.849 [2024-11-04T07:28:21.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.849 07:28:21 -- common/autotest_common.sh@945 -- # kill 97061 00:22:19.849 07:28:21 -- common/autotest_common.sh@950 -- # wait 97061 00:22:19.849 07:28:21 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:19.849 07:28:21 -- host/digest.sh@77 -- # local rw bs qd 00:22:19.849 07:28:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:19.849 07:28:21 -- host/digest.sh@80 -- # rw=randwrite 00:22:19.849 07:28:21 -- host/digest.sh@80 -- # bs=4096 00:22:19.849 07:28:21 -- host/digest.sh@80 -- # qd=128 00:22:19.849 07:28:21 -- host/digest.sh@82 -- # bperfpid=97146 00:22:19.849 07:28:21 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:19.849 07:28:21 -- host/digest.sh@83 -- # waitforlisten 97146 /var/tmp/bperf.sock 00:22:19.849 07:28:21 -- common/autotest_common.sh@819 -- # '[' -z 97146 ']' 00:22:19.849 07:28:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.849 07:28:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:19.849 07:28:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.849 07:28:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.849 07:28:21 -- common/autotest_common.sh@10 -- # set +x 00:22:20.108 [2024-11-04 07:28:21.691947] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:22:20.108 [2024-11-04 07:28:21.692048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97146 ] 00:22:20.108 [2024-11-04 07:28:21.828774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.108 [2024-11-04 07:28:21.910128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.108 07:28:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.108 07:28:21 -- common/autotest_common.sh@852 -- # return 0 00:22:20.108 07:28:21 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:20.108 07:28:21 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:20.108 07:28:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:20.676 07:28:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.676 07:28:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.934 nvme0n1 00:22:20.934 07:28:22 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:20.934 07:28:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:20.934 Running I/O for 2 seconds... 00:22:22.883 00:22:22.883 Latency(us) 00:22:22.883 [2024-11-04T07:28:24.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.883 [2024-11-04T07:28:24.724Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:22.883 nvme0n1 : 2.00 28434.78 111.07 0.00 0.00 4497.65 1869.27 13762.56 00:22:22.883 [2024-11-04T07:28:24.724Z] =================================================================================================================== 00:22:22.883 [2024-11-04T07:28:24.724Z] Total : 28434.78 111.07 0.00 0.00 4497.65 1869.27 13762.56 00:22:22.883 0 00:22:22.883 07:28:24 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:22.883 07:28:24 -- host/digest.sh@92 -- # get_accel_stats 00:22:22.883 07:28:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:22.883 07:28:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:22.883 07:28:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:22.883 | select(.opcode=="crc32c") 00:22:22.883 | "\(.module_name) \(.executed)"' 00:22:23.142 07:28:24 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:23.142 07:28:24 -- host/digest.sh@93 -- # exp_module=software 00:22:23.142 07:28:24 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:23.142 07:28:24 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:23.142 07:28:24 -- host/digest.sh@97 -- # killprocess 97146 00:22:23.142 07:28:24 -- common/autotest_common.sh@926 -- # '[' -z 97146 ']' 00:22:23.142 07:28:24 -- common/autotest_common.sh@930 -- # kill -0 97146 00:22:23.142 07:28:24 -- common/autotest_common.sh@931 -- # uname 00:22:23.142 07:28:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.142 07:28:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97146 00:22:23.401 killing process with pid 97146 00:22:23.401 Received shutdown signal, test time was 
about 2.000000 seconds 00:22:23.401 00:22:23.401 Latency(us) 00:22:23.401 [2024-11-04T07:28:25.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.401 [2024-11-04T07:28:25.242Z] =================================================================================================================== 00:22:23.401 [2024-11-04T07:28:25.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.401 07:28:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:23.401 07:28:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.401 07:28:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97146' 00:22:23.401 07:28:25 -- common/autotest_common.sh@945 -- # kill 97146 00:22:23.401 07:28:25 -- common/autotest_common.sh@950 -- # wait 97146 00:22:23.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:23.659 07:28:25 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:23.659 07:28:25 -- host/digest.sh@77 -- # local rw bs qd 00:22:23.659 07:28:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:23.659 07:28:25 -- host/digest.sh@80 -- # rw=randwrite 00:22:23.659 07:28:25 -- host/digest.sh@80 -- # bs=131072 00:22:23.659 07:28:25 -- host/digest.sh@80 -- # qd=16 00:22:23.659 07:28:25 -- host/digest.sh@82 -- # bperfpid=97218 00:22:23.659 07:28:25 -- host/digest.sh@83 -- # waitforlisten 97218 /var/tmp/bperf.sock 00:22:23.659 07:28:25 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:23.659 07:28:25 -- common/autotest_common.sh@819 -- # '[' -z 97218 ']' 00:22:23.659 07:28:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:23.659 07:28:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.659 07:28:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:23.659 07:28:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.659 07:28:25 -- common/autotest_common.sh@10 -- # set +x 00:22:23.659 [2024-11-04 07:28:25.302405] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:23.659 [2024-11-04 07:28:25.302693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97218 ] 00:22:23.660 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:23.660 Zero copy mechanism will not be used. 
00:22:23.660 [2024-11-04 07:28:25.435815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.918 [2024-11-04 07:28:25.500617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.485 07:28:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.485 07:28:26 -- common/autotest_common.sh@852 -- # return 0 00:22:24.485 07:28:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:24.485 07:28:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:24.485 07:28:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.052 07:28:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.052 07:28:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.311 nvme0n1 00:22:25.311 07:28:26 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:25.311 07:28:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.311 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.311 Zero copy mechanism will not be used. 00:22:25.311 Running I/O for 2 seconds... 00:22:27.214 00:22:27.214 Latency(us) 00:22:27.214 [2024-11-04T07:28:29.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.214 [2024-11-04T07:28:29.055Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:27.214 nvme0n1 : 2.00 7893.76 986.72 0.00 0.00 2022.73 1578.82 10843.23 00:22:27.214 [2024-11-04T07:28:29.055Z] =================================================================================================================== 00:22:27.214 [2024-11-04T07:28:29.055Z] Total : 7893.76 986.72 0.00 0.00 2022.73 1578.82 10843.23 00:22:27.214 0 00:22:27.474 07:28:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:27.474 07:28:29 -- host/digest.sh@92 -- # get_accel_stats 00:22:27.474 07:28:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:27.474 07:28:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:27.474 07:28:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:27.474 | select(.opcode=="crc32c") 00:22:27.474 | "\(.module_name) \(.executed)"' 00:22:27.733 07:28:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:27.733 07:28:29 -- host/digest.sh@93 -- # exp_module=software 00:22:27.733 07:28:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:27.733 07:28:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:27.733 07:28:29 -- host/digest.sh@97 -- # killprocess 97218 00:22:27.733 07:28:29 -- common/autotest_common.sh@926 -- # '[' -z 97218 ']' 00:22:27.733 07:28:29 -- common/autotest_common.sh@930 -- # kill -0 97218 00:22:27.733 07:28:29 -- common/autotest_common.sh@931 -- # uname 00:22:27.733 07:28:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.733 07:28:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97218 00:22:27.733 killing process with pid 97218 00:22:27.733 Received shutdown signal, test time was about 2.000000 seconds 00:22:27.733 00:22:27.733 Latency(us) 00:22:27.733 [2024-11-04T07:28:29.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:22:27.733 [2024-11-04T07:28:29.574Z] =================================================================================================================== 00:22:27.733 [2024-11-04T07:28:29.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.733 07:28:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:27.733 07:28:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:27.733 07:28:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97218' 00:22:27.733 07:28:29 -- common/autotest_common.sh@945 -- # kill 97218 00:22:27.733 07:28:29 -- common/autotest_common.sh@950 -- # wait 97218 00:22:27.991 07:28:29 -- host/digest.sh@126 -- # killprocess 96915 00:22:27.991 07:28:29 -- common/autotest_common.sh@926 -- # '[' -z 96915 ']' 00:22:27.991 07:28:29 -- common/autotest_common.sh@930 -- # kill -0 96915 00:22:27.991 07:28:29 -- common/autotest_common.sh@931 -- # uname 00:22:27.991 07:28:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.991 07:28:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96915 00:22:27.991 killing process with pid 96915 00:22:27.991 07:28:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:27.991 07:28:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:27.992 07:28:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96915' 00:22:27.992 07:28:29 -- common/autotest_common.sh@945 -- # kill 96915 00:22:27.992 07:28:29 -- common/autotest_common.sh@950 -- # wait 96915 00:22:27.992 ************************************ 00:22:27.992 END TEST nvmf_digest_clean 00:22:27.992 ************************************ 00:22:27.992 00:22:27.992 real 0m18.034s 00:22:27.992 user 0m32.861s 00:22:27.992 sys 0m5.482s 00:22:27.992 07:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.992 07:28:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.250 07:28:29 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:28.250 07:28:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:28.250 07:28:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:28.250 07:28:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.250 ************************************ 00:22:28.250 START TEST nvmf_digest_error 00:22:28.250 ************************************ 00:22:28.250 07:28:29 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:28.250 07:28:29 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:28.250 07:28:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.250 07:28:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.250 07:28:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
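nvmf_digest_clean above only checks that digests are computed and counted on the happy path; nvmf_digest_error, which starts here, restarts the target with --wait-for-rpc so crc32c can be re-routed to the accel error-injection module before the accel framework initializes (the assignment has to happen pre-init, which is the reason for the flag). The target-side RPC, issued a little further down in the trace, amounts to:

    # run against the target's default RPC socket before its framework comes up
    rpc.py accel_assign_opc -o crc32c -m error     # "Operation crc32c will be assigned to module error"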
00:22:28.250 07:28:29 -- nvmf/common.sh@469 -- # nvmfpid=97337 00:22:28.250 07:28:29 -- nvmf/common.sh@470 -- # waitforlisten 97337 00:22:28.250 07:28:29 -- common/autotest_common.sh@819 -- # '[' -z 97337 ']' 00:22:28.250 07:28:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.250 07:28:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.250 07:28:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.250 07:28:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.250 07:28:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.250 07:28:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.250 [2024-11-04 07:28:29.942461] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:28.250 [2024-11-04 07:28:29.942569] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.509 [2024-11-04 07:28:30.090826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.509 [2024-11-04 07:28:30.167881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:28.509 [2024-11-04 07:28:30.168048] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.509 [2024-11-04 07:28:30.168062] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.509 [2024-11-04 07:28:30.168070] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.509 [2024-11-04 07:28:30.168112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.446 07:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.446 07:28:30 -- common/autotest_common.sh@852 -- # return 0 00:22:29.446 07:28:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:29.446 07:28:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:29.446 07:28:30 -- common/autotest_common.sh@10 -- # set +x 00:22:29.446 07:28:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.446 07:28:30 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:29.446 07:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.446 07:28:30 -- common/autotest_common.sh@10 -- # set +x 00:22:29.446 [2024-11-04 07:28:30.976602] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:29.446 07:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.446 07:28:30 -- host/digest.sh@104 -- # common_target_config 00:22:29.446 07:28:30 -- host/digest.sh@43 -- # rpc_cmd 00:22:29.446 07:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.446 07:28:30 -- common/autotest_common.sh@10 -- # set +x 00:22:29.446 null0 00:22:29.446 [2024-11-04 07:28:31.080848] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.446 [2024-11-04 07:28:31.104994] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.446 07:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.446 07:28:31 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:29.446 07:28:31 -- host/digest.sh@54 -- # local rw bs qd 00:22:29.446 07:28:31 -- host/digest.sh@56 -- # rw=randread 00:22:29.446 07:28:31 -- host/digest.sh@56 -- # bs=4096 00:22:29.446 07:28:31 -- host/digest.sh@56 -- # qd=128 00:22:29.446 07:28:31 -- host/digest.sh@58 -- # bperfpid=97381 00:22:29.446 07:28:31 -- host/digest.sh@60 -- # waitforlisten 97381 /var/tmp/bperf.sock 00:22:29.446 07:28:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:29.446 07:28:31 -- common/autotest_common.sh@819 -- # '[' -z 97381 ']' 00:22:29.446 07:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:29.446 07:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:29.446 07:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:29.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:29.446 07:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:29.446 07:28:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.446 [2024-11-04 07:28:31.168846] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
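Unlike the clean runs, run_bperf_err starts bdevperf without --wait-for-rpc and, before attaching the controller, puts the host into a mode where injected digest failures are observable but not fatal: per-error statistics are enabled and the bdev retry count is set to -1 (unlimited retries), so reads that fail the digest check are retried rather than aborting the two-second job. The host-side calls, as they appear in the next stretch of the trace:

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0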
00:22:29.446 [2024-11-04 07:28:31.169252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97381 ] 00:22:29.705 [2024-11-04 07:28:31.310974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.705 [2024-11-04 07:28:31.389458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.642 07:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.642 07:28:32 -- common/autotest_common.sh@852 -- # return 0 00:22:30.642 07:28:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.642 07:28:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.642 07:28:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:30.642 07:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.642 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.642 07:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.642 07:28:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.642 07:28:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.901 nvme0n1 00:22:30.901 07:28:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:30.901 07:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.901 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.901 07:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.901 07:28:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:30.901 07:28:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:31.160 Running I/O for 2 seconds... 
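The lines above complete the setup for the digest-error case: bdevperf is told to keep NVMe error statistics and retry failed I/O indefinitely, a controller is attached over TCP with data digest enabled (--ddgst), crc32c error injection is armed, and perform_tests launches the 2-second randread run whose output follows. A condensed sketch of that RPC sequence, reconstructed from the commands logged above (paths and arguments are copied from the log; that the unprefixed rpc_cmd calls address the nvmf target on the default /var/tmp/spdk.sock is an assumption):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  # bdevperf side: keep per-error statistics and never give up on a failed I/O.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previously configured crc32c injection, then attach with data digest enabled.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption (flags exactly as logged), then run the 2-second randread job.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  $BPERF_PY -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up below as a host-side "data digest error" from nvme_tcp.c paired with a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure; the unlimited bdev retry count is what keeps the run going for the full two seconds despite them.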
00:22:31.160 [2024-11-04 07:28:32.797379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.797434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.797450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.806680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.806714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.806726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.819489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.819522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.819533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.828693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.828724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.828735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.838378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.838409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.838420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.849792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.849823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.849835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.861953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.861985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.861996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.871364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.871407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.880665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.160 [2024-11-04 07:28:32.880696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.160 [2024-11-04 07:28:32.880707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.160 [2024-11-04 07:28:32.890115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.890146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.890158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.900041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.900072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.900083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.909735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.909768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.909779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.920326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.920357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.920369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.932206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.932236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.932248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.940223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.940254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.940266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.952895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.952924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.952935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.965075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.965106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.965118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.973609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.973640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.973651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.985994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.986024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.986035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.161 [2024-11-04 07:28:32.998602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.161 [2024-11-04 07:28:32.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.161 [2024-11-04 07:28:32.998653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.009800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.009831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.009842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.018757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.018789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.018800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.030369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.030401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.030412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.041461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.041493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.041505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.050196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.050227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.050238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.061479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.061510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.061522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.074186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.074218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.074230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.082976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.083006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:31.420 [2024-11-04 07:28:33.083017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.092377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.092408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.092419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.101595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.101626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.101637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.111393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.420 [2024-11-04 07:28:33.111425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.420 [2024-11-04 07:28:33.111436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.420 [2024-11-04 07:28:33.121247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.121278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.121289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.132161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.132192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.132203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.141535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.141566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.141578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.151930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.151960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:267 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.151972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.162337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.162368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.162379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.171824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.171855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.171866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.181028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.181059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.181070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.190700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.190731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.190742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.202173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.202204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.202215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.211921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.211951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.211962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.221402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.221435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.221446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.232413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.232445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.232456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.241515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.241546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.241556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.421 [2024-11-04 07:28:33.251110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.421 [2024-11-04 07:28:33.251142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.421 [2024-11-04 07:28:33.251153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.261781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.261812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.680 [2024-11-04 07:28:33.261823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.272251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.272283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.680 [2024-11-04 07:28:33.272294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.282184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.282215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.680 [2024-11-04 07:28:33.282227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.293200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.293231] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.680 [2024-11-04 07:28:33.293243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.301604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.301635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.680 [2024-11-04 07:28:33.301646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.680 [2024-11-04 07:28:33.312248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.680 [2024-11-04 07:28:33.312279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.312290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.321417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.321447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.321458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.331221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.331252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.331264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.339908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.339937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.339948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.350023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.350054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.358910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 
00:22:31.681 [2024-11-04 07:28:33.358953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.358974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.369152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.369182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.369193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.379615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.379658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.390030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.390060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.390070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.400145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.400176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.409772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.409803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.409814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.421200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.421231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.421242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.429915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.429945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.429956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.440726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.440757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.440768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.449143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.449174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.449184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.460791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.460822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.460833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.472640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.472671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.472682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.483499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.483529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.483540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.495987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.496018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.496028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.505801] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.505832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.681 [2024-11-04 07:28:33.514659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.681 [2024-11-04 07:28:33.514690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.681 [2024-11-04 07:28:33.514701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.528786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.528818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.528830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.537334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.537365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.537375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.549091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.549121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.549133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.562598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.562631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.562642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.575328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.575360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.575371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:31.941 [2024-11-04 07:28:33.587606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.587637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.587648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.600991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.601021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.601032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.613795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.613838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.613849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.621793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.621824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.634217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.646324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.646356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.646367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.659544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.659576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.659588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.671641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.671672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.671684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.683055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.683086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.683097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.693928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.693957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.693968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.702196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.702226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.702237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.714349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.714380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.714391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.726171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.726202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.726213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.736835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.736867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.736893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.748164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.748195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.748206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.761356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.761399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.941 [2024-11-04 07:28:33.761410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.941 [2024-11-04 07:28:33.774835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:31.941 [2024-11-04 07:28:33.774868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.942 [2024-11-04 07:28:33.774890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.786813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.786845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.786856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.798115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.798157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.798169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.807544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.807587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.807598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.818546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.818585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.818604] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.831263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.831306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.831317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.840365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.840396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.840407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.853237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.853280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.853291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.864989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.865032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.865044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.876331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.876361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.876372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.886372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.886414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.886426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.898830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.898861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:32.201 [2024-11-04 07:28:33.898883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.907571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.907602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.907613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.920783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.920814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.920825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.934084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.201 [2024-11-04 07:28:33.934127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.201 [2024-11-04 07:28:33.934138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.201 [2024-11-04 07:28:33.944816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.944847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.944857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:33.953558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.953589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.953600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:33.965863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.965902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.965914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:33.975056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.975087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:24961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.975097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:33.987917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.987947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.987958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:33.999951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:33.999981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:33.999992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:34.011603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:34.011634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:34.011646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:34.020552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:34.020584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:34.020595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:34.030396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:34.030427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:34.030438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.202 [2024-11-04 07:28:34.039941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.202 [2024-11-04 07:28:34.039970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.202 [2024-11-04 07:28:34.039981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.051919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.051948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.051959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.063840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.063870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.063907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.072915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.072945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.072957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.082619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.082651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.082662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.090891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.090920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.090932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.103046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.103078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.103090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.112459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.112490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.112500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.122051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 
[2024-11-04 07:28:34.122082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.122093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.131652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.131683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.131693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.141181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.141211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.141222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.149979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.150013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.150024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.461 [2024-11-04 07:28:34.159681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.461 [2024-11-04 07:28:34.159713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.461 [2024-11-04 07:28:34.159725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.169999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.170031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.170043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.181686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.181717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.181729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.191246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.191278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.191289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.200858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.200898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.200910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.211308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.211339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.211350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.222227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.222258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.222270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.231954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.231985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.231996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.244372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.244403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.244414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.252907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.252937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.252948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.265001] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.265031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.265042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.277469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.277500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.277511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.462 [2024-11-04 07:28:34.290133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.462 [2024-11-04 07:28:34.290163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.462 [2024-11-04 07:28:34.290174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.303406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.303436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.303448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.313904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.313934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.313945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.323931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.323961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.323972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.333325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.333356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.333367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.344174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.344205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.344217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.354403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.354434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.354445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.363934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.363964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.363974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.374767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.374799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.374810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.386392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.721 [2024-11-04 07:28:34.386423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.721 [2024-11-04 07:28:34.386434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.721 [2024-11-04 07:28:34.395489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.395532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.395544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.407681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.407712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.407723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.419949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.419980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.419991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.432726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.432757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.432768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.444777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.444808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.444819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.456082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.456112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.456123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.465288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.465320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.465331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.478102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.478133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.478144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.490000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.490030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.490041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.502831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.502863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.502897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.514231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.514262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.514272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.523500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.523530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.523542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.536267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.536298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.536309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.548808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.548839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.548849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.722 [2024-11-04 07:28:34.559954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.722 [2024-11-04 07:28:34.559984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.722 [2024-11-04 07:28:34.559995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.568964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.568995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.569006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.580420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.580462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.580474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.592040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.592083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.592094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.603226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.603256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.603278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.612779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.612810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.612822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.622130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.622172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.622184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.631259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.631289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.631300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.641919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.641949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.641961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.650476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.650506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.650517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.660554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.660585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.660596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.981 [2024-11-04 07:28:34.671722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.981 [2024-11-04 07:28:34.671752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.981 [2024-11-04 07:28:34.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.680241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.680273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.680284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.689770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.689801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.689816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.699623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.699655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.699666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.709864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.709904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:21452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.709916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.720211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.720243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.720254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.729894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.729924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.729935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.739947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.739975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.739987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.750884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.750928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.750939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.763421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.763453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.763465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 [2024-11-04 07:28:34.774135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xda18d0) 00:22:32.982 [2024-11-04 07:28:34.774168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.982 [2024-11-04 07:28:34.774179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.982 00:22:32.982 Latency(us) 00:22:32.982 [2024-11-04T07:28:34.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.982 [2024-11-04T07:28:34.823Z] Job: nvme0n1 (Core Mask 0x2, 
workload: randread, depth: 128, IO size: 4096) 00:22:32.982 nvme0n1 : 2.00 23768.58 92.85 0.00 0.00 5379.49 2442.71 16562.73 00:22:32.982 [2024-11-04T07:28:34.823Z] =================================================================================================================== 00:22:32.982 [2024-11-04T07:28:34.823Z] Total : 23768.58 92.85 0.00 0.00 5379.49 2442.71 16562.73 00:22:32.982 0 00:22:32.982 07:28:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:32.982 07:28:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:32.982 07:28:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:32.982 07:28:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:32.982 | .driver_specific 00:22:32.982 | .nvme_error 00:22:32.982 | .status_code 00:22:32.982 | .command_transient_transport_error' 00:22:33.241 07:28:35 -- host/digest.sh@71 -- # (( 186 > 0 )) 00:22:33.241 07:28:35 -- host/digest.sh@73 -- # killprocess 97381 00:22:33.241 07:28:35 -- common/autotest_common.sh@926 -- # '[' -z 97381 ']' 00:22:33.241 07:28:35 -- common/autotest_common.sh@930 -- # kill -0 97381 00:22:33.241 07:28:35 -- common/autotest_common.sh@931 -- # uname 00:22:33.241 07:28:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:33.241 07:28:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97381 00:22:33.513 07:28:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:33.513 07:28:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:33.513 killing process with pid 97381 00:22:33.513 07:28:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97381' 00:22:33.513 Received shutdown signal, test time was about 2.000000 seconds 00:22:33.513 00:22:33.513 Latency(us) 00:22:33.513 [2024-11-04T07:28:35.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.513 [2024-11-04T07:28:35.354Z] =================================================================================================================== 00:22:33.513 [2024-11-04T07:28:35.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.514 07:28:35 -- common/autotest_common.sh@945 -- # kill 97381 00:22:33.514 07:28:35 -- common/autotest_common.sh@950 -- # wait 97381 00:22:33.514 07:28:35 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:33.514 07:28:35 -- host/digest.sh@54 -- # local rw bs qd 00:22:33.514 07:28:35 -- host/digest.sh@56 -- # rw=randread 00:22:33.514 07:28:35 -- host/digest.sh@56 -- # bs=131072 00:22:33.514 07:28:35 -- host/digest.sh@56 -- # qd=16 00:22:33.514 07:28:35 -- host/digest.sh@58 -- # bperfpid=97471 00:22:33.514 07:28:35 -- host/digest.sh@60 -- # waitforlisten 97471 /var/tmp/bperf.sock 00:22:33.514 07:28:35 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:33.514 07:28:35 -- common/autotest_common.sh@819 -- # '[' -z 97471 ']' 00:22:33.514 07:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:33.514 07:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:33.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:33.514 07:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
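The get_transient_errcount step traced above is the pass/fail check for the digest run that just finished: it queries the bdevperf instance over its RPC socket for nvme0n1's I/O statistics and pulls the transient-transport-error counter out of the returned JSON. A minimal sketch of that check, reconstructed from the trace (the script path, socket path and bdev name are the ones shown in the log; the errcount variable is only illustrative shorthand):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run above reported 186 transient transport errors, so the "(( 186 > 0 ))"
  # assertion in the trace passed and the bdevperf process (pid 97381) was shut down and reaped.
  (( errcount > 0 )) || exit 1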
00:22:33.514 07:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:33.514 07:28:35 -- common/autotest_common.sh@10 -- # set +x 00:22:33.776 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.776 Zero copy mechanism will not be used. 00:22:33.776 [2024-11-04 07:28:35.385109] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:33.776 [2024-11-04 07:28:35.385209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97471 ] 00:22:33.776 [2024-11-04 07:28:35.524133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.776 [2024-11-04 07:28:35.578824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.712 07:28:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:34.712 07:28:36 -- common/autotest_common.sh@852 -- # return 0 00:22:34.712 07:28:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.712 07:28:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:34.971 07:28:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:34.971 07:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.971 07:28:36 -- common/autotest_common.sh@10 -- # set +x 00:22:34.971 07:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.971 07:28:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:34.971 07:28:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:35.230 nvme0n1 00:22:35.230 07:28:36 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:35.230 07:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.230 07:28:36 -- common/autotest_common.sh@10 -- # set +x 00:22:35.230 07:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:35.230 07:28:36 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:35.230 07:28:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:35.230 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:35.230 Zero copy mechanism will not be used. 00:22:35.230 Running I/O for 2 seconds... 
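Before this 131072-byte, queue-depth-16 pass starts, the trace above repeats the same preparation pattern as the earlier runs: a fresh bdevperf is started on /var/tmp/bperf.sock, the NVMe bdev layer is configured with --nvme-error-stat and a bdev retry count of -1, the controller is attached over TCP with data digest (--ddgst) enabled, and the accel layer is told to corrupt CRC-32C results so that reads complete with digest errors. Condensed into plain rpc.py calls, the sequence looks roughly like this (a sketch only; the RPC shell variable is illustrative shorthand, while the socket path, target address and subsystem NQN are taken verbatim from the log):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep per-status NVMe error counters; retry count as in the trace
  $RPC accel_error_inject_error -o crc32c -t disable                    # clear any injection left over from the previous case
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with TCP data digest enabled; exposes nvme0n1
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32              # start corrupting crc32c results (arguments as in the trace)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each READ whose received data digest fails verification is logged as a data digest error and surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the stream of completion entries that follows.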
00:22:35.230 [2024-11-04 07:28:37.031312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.031384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.035507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.035539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.035550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.039601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.039632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.039643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.043617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.043649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.043661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.047532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.047563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.047574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.051402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.051433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.051444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.055358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.055389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-11-04 07:28:37.055400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.230 [2024-11-04 07:28:37.058985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.230 [2024-11-04 07:28:37.059028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.231 [2024-11-04 07:28:37.059039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.231 [2024-11-04 07:28:37.062458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.231 [2024-11-04 07:28:37.062488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.231 [2024-11-04 07:28:37.062499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.231 [2024-11-04 07:28:37.066047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.231 [2024-11-04 07:28:37.066089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.231 [2024-11-04 07:28:37.066101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.070034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.070076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.070087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.073822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.073854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.073865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.077741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.077771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.077783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.081325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.081355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.081366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.084800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.084829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.084840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.088834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.088866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.088889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.092411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.092442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.092452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.096204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.096247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.096258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.099597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.099627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-11-04 07:28:37.102813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.491 [2024-11-04 07:28:37.102843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-11-04 07:28:37.102854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.106935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.106988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.110743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.110774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.110785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.114637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.114679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.114691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.117546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.117576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.117587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.122020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.122049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.122060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.125898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.125927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.125937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.129417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.129447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.129458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.132677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.132706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.492 [2024-11-04 07:28:37.132717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.135613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.135643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.135654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.139157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.139200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.139211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.142929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.142958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.142969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.146519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.146548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.146580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.150150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.150179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.154037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.154066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.154077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.157668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.157699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.157710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.161593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.161623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.161635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.165593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.165622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.165632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.168998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.169039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.169050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.172638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.172667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.172677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.176658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.176690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.176701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.179687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.179716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.179727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.182916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.182945] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.182956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.187299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.187328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.187338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.190626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.190667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.190677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.194614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.194643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.194657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.198209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.198237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.198248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.202252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.492 [2024-11-04 07:28:37.202283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.492 [2024-11-04 07:28:37.202294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.492 [2024-11-04 07:28:37.206249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.206278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.206289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.209926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.209956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.209967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.213503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.213533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.213543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.217428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.217459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.217470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.221502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.221532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.221543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.224906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.224947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.224958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.229058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.229087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.229099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.232758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.232788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.232798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.236150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.236180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.236191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.239133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.239175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.239185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.243041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.243070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.243082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.246993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.247022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.247032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.250679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.250721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.250732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.254021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.254050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.254060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.257482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.257512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.257522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.261551] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.261580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.261591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.265425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.265454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.265465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.269156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.269186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.269197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.271943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.271983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.271993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.275929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.275957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.275968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.280016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.280043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.280055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.283605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.283633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.283645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:35.493 [2024-11-04 07:28:37.288063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.288106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.288117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.291740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.291775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.291798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.295915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.295955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.299574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.299605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.299616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.303254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.493 [2024-11-04 07:28:37.303283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-11-04 07:28:37.303293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.493 [2024-11-04 07:28:37.307688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.307716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.307727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.494 [2024-11-04 07:28:37.311085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.311127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.311138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.494 [2024-11-04 07:28:37.315145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.315175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.315186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.494 [2024-11-04 07:28:37.318970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.319023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.319035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.494 [2024-11-04 07:28:37.323093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.323122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.323132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.494 [2024-11-04 07:28:37.326709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.494 [2024-11-04 07:28:37.326738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-11-04 07:28:37.326749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.331325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.331354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.331365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.334994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.335035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.335046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.338797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.338827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.338838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.342017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.342045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.342057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.345598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.345629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.345640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.349456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.349498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.349509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.353598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.353640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.357647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.357675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.357686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.361316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.361345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.361357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.364727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.364757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.364769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.368614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.368656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.368667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.373155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.373197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.373208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.376515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.376555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.380365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.380407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.380418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.383331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.383373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.383384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.386944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.386984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.386995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.390822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.390852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:35.755 [2024-11-04 07:28:37.390865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.394911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.394951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.394973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.398584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.398621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.398636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.402777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.402820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.755 [2024-11-04 07:28:37.402831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.755 [2024-11-04 07:28:37.406720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.755 [2024-11-04 07:28:37.406750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.406761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.410118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.410160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.410171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.414421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.414463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.414474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.417375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.417417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.417429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.422167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.422209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.422220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.425491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.425533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.429402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.429444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.429455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.432699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.432742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.432753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.436505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.436546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.436557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.440571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.440600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.440611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.445051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.445079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.445090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.448247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.448277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.448287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.452041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.452083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.452094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.455761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.455803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.455814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.459282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.459324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.459335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.463133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.463167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.463189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.466453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.466495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.466505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.470542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 
[2024-11-04 07:28:37.470586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.470608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.474419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.474448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.474459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.477423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.477453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.477464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.481934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.481976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.481986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.485389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.485419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.485430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.488900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.488941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.488952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.492251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.492307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.495681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.495711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.495721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.499221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.499251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.499262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.503143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.503173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.503184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.756 [2024-11-04 07:28:37.506188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.756 [2024-11-04 07:28:37.506230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.756 [2024-11-04 07:28:37.506252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.509707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.509737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.509747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.512957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.512986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.512996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.516840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.516869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.516892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.519764] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.519792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.519802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.523537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.523567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.523579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.526888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.526918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.526929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.530464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.530492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.530503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.534308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.534337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.534348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.537920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.537961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.537972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.541612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.541642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.541652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:35.757 [2024-11-04 07:28:37.545250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.545280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.549127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.549155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.549166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.552012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.552053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.552064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.555506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.555536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.555547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.558907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.558936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.558947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.562756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.562786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.562797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.566332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.566362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.566372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.570231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.570260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.570271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.573421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.573458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.573469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.577312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.577357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.580911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.580944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.580956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.584491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.584525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.584537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.757 [2024-11-04 07:28:37.587982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:35.757 [2024-11-04 07:28:37.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.757 [2024-11-04 07:28:37.588252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.592068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.592227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.592243] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.596738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.596773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.596785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.600517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.600551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.600570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.604587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.604621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.604633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.608234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.608268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.608280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.611786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.611976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.611998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.616242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.616275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.620125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.620159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.620171] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.623217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.623252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.623264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.626971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.627006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.627024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.631024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.018 [2024-11-04 07:28:37.631057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.018 [2024-11-04 07:28:37.631077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.018 [2024-11-04 07:28:37.634580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.634624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.634644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.638349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.638384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.638403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.642421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.642457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.642475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.647026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.647061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.019 [2024-11-04 07:28:37.647082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.650891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.650935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.650946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.653991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.654024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.654043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.658151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.658184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.658203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.662365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.662399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.662411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.666869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.666923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.666943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.671370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.671404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.671416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.675590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.675625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.675637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.678269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.678303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.678315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.682308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.682343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.682355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.686066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.686209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.686228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.689343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.689393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.689406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.693240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.693274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.693295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.697342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.697376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.697388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.701356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.701391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.701403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.705063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.705097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.705109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.708122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.708155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.708167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.711861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.711919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.711940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.715079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.715226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.718868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.719082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.719245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.722537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.722749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.722866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.726357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.726508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.726655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.730276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.730427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.730539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.734485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.019 [2024-11-04 07:28:37.734666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.019 [2024-11-04 07:28:37.734780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.019 [2024-11-04 07:28:37.738831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.739023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.739137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.743007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.743199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.743345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.747442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.747596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.747704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.751253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.751405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.751518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.754757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.754952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.755083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.758903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.759073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.759092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.763370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.763405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.763417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.767634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.767669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.767681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.771024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.771058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.771077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.774604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.774639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.774660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.778102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.778136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.778148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.782231] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.782267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.782279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.785623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.785793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.785811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.789226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.789255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.789268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.792847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.792891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.792904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.796081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.796114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.799530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.799563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.799575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.803395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.803430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.803443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:36.020 [2024-11-04 07:28:37.806569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.806611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.806633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.810214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.810247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.810265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.813849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.814006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.814024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.817821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.817993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.818011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.821775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.821921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.821941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.825291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.825442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.825460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.828987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.829017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.829029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.833044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.833079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.833090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.836327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.836361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.020 [2024-11-04 07:28:37.836373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.020 [2024-11-04 07:28:37.839773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.020 [2024-11-04 07:28:37.839808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.021 [2024-11-04 07:28:37.839819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.021 [2024-11-04 07:28:37.843694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.021 [2024-11-04 07:28:37.843729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.021 [2024-11-04 07:28:37.843742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.021 [2024-11-04 07:28:37.847660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.021 [2024-11-04 07:28:37.847693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.021 [2024-11-04 07:28:37.847705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.021 [2024-11-04 07:28:37.851236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.021 [2024-11-04 07:28:37.851271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.021 [2024-11-04 07:28:37.851289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.021 [2024-11-04 07:28:37.855531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.021 [2024-11-04 07:28:37.855564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.021 [2024-11-04 07:28:37.855576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.859525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.859560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.859572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.863060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.863094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.863112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.866646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.866802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.866824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.870437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.870473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.870491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.874059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.874208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.874345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.878451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.878626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.878774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.882177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.882321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.882339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.885896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.885931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.885951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.890004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.890038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.890050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.893496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.893531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.893543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.897449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.281 [2024-11-04 07:28:37.897483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.281 [2024-11-04 07:28:37.897495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.281 [2024-11-04 07:28:37.900788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.900966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.900984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.904205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.904242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.904253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.908125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 
[2024-11-04 07:28:37.908171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.911674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.911832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.911849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.915816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.915982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.916001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.919396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.919432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.919453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.923302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.923336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.923355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.927019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.927054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.927066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.930425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.930460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.930471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.934404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.934438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.934450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.937996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.938029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.938040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.941367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.941402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.941413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.944918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.944952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.944963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.948590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.948623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.948635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.952030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.952065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.952084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.955972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.956005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.956024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.959537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.959696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.959715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.963970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.964005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.964024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.966668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.966702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.966721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.970785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.970821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.970843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.973698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.973731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.973742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.978309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.978345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.978357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.981683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.981838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.981859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.985301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.985336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.985348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.989532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.989566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.989578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.993524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.993558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.993570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:37.996846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:37.996890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.282 [2024-11-04 07:28:37.996902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.282 [2024-11-04 07:28:38.000701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.282 [2024-11-04 07:28:38.000735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.000747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.004519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.004552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.004564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.007384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.007418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.007430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.011472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 
00:22:36.283 [2024-11-04 07:28:38.011503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.011514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.014968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.015009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.018527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.018555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.018575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.022020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.022060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.022071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.025972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.026001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.026013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.029520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.029549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.029560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.033478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.033507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.033517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.037085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.037116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.037127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.040426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.040456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.040467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.043798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.043828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.043839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.047430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.047459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.047470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.050935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.050976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.050987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.054589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.054622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.054632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.058185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.058214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.058225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.062056] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.062085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.062096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.065663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.065692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.065703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.070017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.070058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.070069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.073572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.073602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.073613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.077565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.077593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.077604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.081241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.081269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.081280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.084604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.084635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.084646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:36.283 [2024-11-04 07:28:38.088541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.088571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.088581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.092138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.092180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.092191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.096092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.096121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.096132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.283 [2024-11-04 07:28:38.100034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.283 [2024-11-04 07:28:38.100075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.283 [2024-11-04 07:28:38.100086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.284 [2024-11-04 07:28:38.103933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.284 [2024-11-04 07:28:38.103973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.284 [2024-11-04 07:28:38.103984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.284 [2024-11-04 07:28:38.107260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.284 [2024-11-04 07:28:38.107291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.284 [2024-11-04 07:28:38.107301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.284 [2024-11-04 07:28:38.110493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.284 [2024-11-04 07:28:38.110523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.284 [2024-11-04 07:28:38.110534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.284 [2024-11-04 07:28:38.114128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.284 [2024-11-04 07:28:38.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.284 [2024-11-04 07:28:38.114168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.284 [2024-11-04 07:28:38.118969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.284 [2024-11-04 07:28:38.118998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.284 [2024-11-04 07:28:38.119009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.122382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.122423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.122434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.126034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.126077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.126088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.129595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.129625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.129636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.133550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.133580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.133591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.137040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.137082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.137093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.141049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.141092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.141103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.144743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.144773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.544 [2024-11-04 07:28:38.144784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.544 [2024-11-04 07:28:38.147911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.544 [2024-11-04 07:28:38.147940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.147951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.151805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.151834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.151845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.154810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.154854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.154865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.158628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.158664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.158676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.162614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.162643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.162657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.165601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.165630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.165640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.169422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.169452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.169463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.173177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.173220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.173231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.177367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.177407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.180568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.180599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.180610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.184446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.184476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.184487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.187943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.187973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.187984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.191596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.191626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.191636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.194662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.194706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.194718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.198063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.198091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.198102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.201796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.201825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.201836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.205378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.205408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.205419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.208814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.208845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.208856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.212406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.212437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.212448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.216297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.216327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.216337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.219834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.219864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.219887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.223432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.223462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.223472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.226996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.227038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.227049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.231891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.231918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.231929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.237232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.237255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.242024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.242066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.242076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.245856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.245897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.545 [2024-11-04 07:28:38.245908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.545 [2024-11-04 07:28:38.250074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.545 [2024-11-04 07:28:38.250102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.250112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.254174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.254202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.254212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.257532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.257562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.257572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.261012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.261042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.261054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.264400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.264430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.264441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.267935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 
[2024-11-04 07:28:38.267965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.267976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.271485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.271515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.271526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.275086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.275116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.275127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.278534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.278570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.278589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.281724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.281753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.281763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.285402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.285432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.285443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.289292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.289321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.289332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.292965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.292994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.293005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.296486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.296516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.296527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.299996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.300025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.300035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.303551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.303580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.303591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.307189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.307218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.310614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.310644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.310655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.314217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.314257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.314268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.318208] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.318236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.318247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.321940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.321979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.321990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.325601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.325631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.325643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.329675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.329704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.329715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.332982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.333023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.336720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.336750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.336761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.340463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.340494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.340504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:36.546 [2024-11-04 07:28:38.344076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.344105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.344116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.347422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.546 [2024-11-04 07:28:38.347453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.546 [2024-11-04 07:28:38.347463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.546 [2024-11-04 07:28:38.351091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.351121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.351132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.354989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.355020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.355031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.358786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.358817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.358828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.362383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.362413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.362423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.365911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.365940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.365951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.368947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.368977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.368987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.372534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.372564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.372574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.375786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.375815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.375826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.547 [2024-11-04 07:28:38.379679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.547 [2024-11-04 07:28:38.379708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.547 [2024-11-04 07:28:38.379719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.807 [2024-11-04 07:28:38.383722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.807 [2024-11-04 07:28:38.383750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.807 [2024-11-04 07:28:38.383761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.807 [2024-11-04 07:28:38.386817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.807 [2024-11-04 07:28:38.386848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.807 [2024-11-04 07:28:38.386859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.807 [2024-11-04 07:28:38.390539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.807 [2024-11-04 07:28:38.390576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.807 [2024-11-04 07:28:38.390595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.807 [2024-11-04 07:28:38.393531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.807 [2024-11-04 07:28:38.393561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.807 [2024-11-04 07:28:38.393572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.807 [2024-11-04 07:28:38.397210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.807 [2024-11-04 07:28:38.397252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.397264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.401109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.401138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.401149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.404990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.405031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.405042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.408615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.408645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.408656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.412121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.412151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.412162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.415457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.415500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.415511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.418947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.418988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.419010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.422846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.422889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.422908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.426290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.426320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.426331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.430081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.430123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.430134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.433572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.433603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.433614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.437315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.437344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.437355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.440723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.440753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:36.808 [2024-11-04 07:28:38.440764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.444621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.444652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.444663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.448198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.448229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.448240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.452278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.452308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.452319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.455738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.455769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.455780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.459043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.459086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.459097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.462747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.462778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.462789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.466813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.466843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.466854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.470399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.470429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.470440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.473690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.473720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.473731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.477746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.477777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.477790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.481034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.481088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.484923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.484964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.484975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.488341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.488372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.488382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.492071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.492101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.492112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.808 [2024-11-04 07:28:38.495905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.808 [2024-11-04 07:28:38.495935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.808 [2024-11-04 07:28:38.495946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.499059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.499088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.499099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.502264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.502294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.502305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.505989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.506029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.506040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.510134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.510175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.510186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.514958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.514997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.515020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.518681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 
[2024-11-04 07:28:38.518712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.518723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.522958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.522988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.523000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.526171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.526213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.526223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.529636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.529668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.529679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.533793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.533846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.537867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.537909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.537924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.541199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.541241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.541252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.545105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.545134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.545145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.548949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.548992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.549003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.552172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.552215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.552225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.555658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.555700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.555710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.559804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.559846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.559856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.563066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.563107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.563118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.567040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.567082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.567093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.570483] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.570524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.570535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.574040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.574069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.574079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.577752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.577782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.577793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.581417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.581447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.581458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.585503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.585543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.588989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.589018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.589028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.592320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.592350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.592362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:36.809 [2024-11-04 07:28:38.595915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.809 [2024-11-04 07:28:38.595955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.809 [2024-11-04 07:28:38.595966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.809 [2024-11-04 07:28:38.599461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.599503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.599514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.602725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.602767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.602778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.606479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.606521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.606532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.610265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.610306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.610317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.613742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.613772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.613783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.617274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.617304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.617315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.621039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.621079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.621089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.625270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.625315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.625326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.629025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.629068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.629079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.632599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.632642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.632653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.635906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.635934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.635947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.640231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.640261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.640272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.810 [2024-11-04 07:28:38.644398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:36.810 [2024-11-04 07:28:38.644440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.810 [2024-11-04 07:28:38.644451] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.070 [2024-11-04 07:28:38.647733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.647762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.647773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.651764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.651794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.651805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.655521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.655563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.655573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.660107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.660149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.660161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.664827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.664868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.664901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.668257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.668287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.668304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.671818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.671849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.671859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.676239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.676270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.676292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.679970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.680011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.680022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.684034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.684064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.684074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.687421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.687449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.687460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.691789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.691818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.691829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.695616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.695657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.695668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.700281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.700311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:37.071 [2024-11-04 07:28:38.700322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.703746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.703788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.703799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.707263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.707306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.707316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.711289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.711318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.711330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.714752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.714783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.714794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.718801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.718831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.718844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.721682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.721710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.721720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.725807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.725837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.725847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.729143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.729186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.729197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.732939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.732980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.732990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.736717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.736748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.736758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.740299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.740329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.740340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.744376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.744406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.744417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.071 [2024-11-04 07:28:38.748052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.071 [2024-11-04 07:28:38.748081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.071 [2024-11-04 07:28:38.748092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.751220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.751250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.751261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.754857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.754896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.754919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.758912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.758955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.758965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.762780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.762810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.766773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.766802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.766812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.769365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.769393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.769404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.773528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.773558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.773568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.776491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 
00:22:37.072 [2024-11-04 07:28:38.776520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.776531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.780564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.780594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.780604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.783630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.783660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.783671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.787561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.787589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.787599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.791524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.791553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.791564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.795151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.795181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.795192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.798827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.798856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.798866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.802618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.802648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.802658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.806099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.806128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.806139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.809194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.809223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.809234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.812909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.812937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.812947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.816782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.816811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.816821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.820896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.820924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.820934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.824614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.824643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.824654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.828518] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.828559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.831947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.831988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.831999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.835831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.835861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.835893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.839106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.839135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.839146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.842176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.842204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.842214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.845868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.072 [2024-11-04 07:28:38.845909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.072 [2024-11-04 07:28:38.845920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.072 [2024-11-04 07:28:38.849223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.849252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.849262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:37.073 [2024-11-04 07:28:38.853205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.853233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.853244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.856486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.856515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.860610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.860638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.860649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.864557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.864586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.864597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.868099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.868127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.868137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.871816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.871845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.871857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.874476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.874514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.878698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.878728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.878739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.881585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.881614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.881625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.885417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.885446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.885457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.888913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.888941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.888952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.892249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.892279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.896152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.896183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.896193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.900191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.900221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.900232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.903906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.903946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.903957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.073 [2024-11-04 07:28:38.907365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.073 [2024-11-04 07:28:38.907394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.073 [2024-11-04 07:28:38.907410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.911476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.911505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.911516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.914694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.914723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.914734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.918257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.918287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.918297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.922345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.922375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.922386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.925557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.925585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.333 [2024-11-04 07:28:38.925596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.928750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.928780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.928790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.932893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.932933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.932944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.936568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.936598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.936608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.939681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.939709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.939721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.943041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.943070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.946920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.946949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.946959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.950496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.950524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.950534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.954813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.954842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.954856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.958301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.958331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.958342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.962292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.962321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.962332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.966199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.966228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.966239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.969727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.969757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.973170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.973211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.973222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.976567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.976597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.976607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.980067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.980096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.980107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.983901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.983930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.983941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.987370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.333 [2024-11-04 07:28:38.987399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.333 [2024-11-04 07:28:38.987410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.333 [2024-11-04 07:28:38.991357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:38.991387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:38.991398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:38.994543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:38.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:38.994607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:38.998495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:38.998524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:38.998535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.002159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 
00:22:37.334 [2024-11-04 07:28:39.002187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.002199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.005408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.005438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.005449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.009366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.009397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.009408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.012419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.012449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.012460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.016424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.016453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.016464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.019256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.019285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.019295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.334 [2024-11-04 07:28:39.022969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ccd10) 00:22:37.334 [2024-11-04 07:28:39.022997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.334 [2024-11-04 07:28:39.023008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.334 00:22:37.334 Latency(us) 00:22:37.334 [2024-11-04T07:28:39.175Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.334 [2024-11-04T07:28:39.175Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:37.334 nvme0n1 : 2.00 8333.75 1041.72 0.00 0.00 1916.94 688.87 7804.74
00:22:37.334 [2024-11-04T07:28:39.175Z] ===================================================================================================================
00:22:37.334 [2024-11-04T07:28:39.175Z] Total : 8333.75 1041.72 0.00 0.00 1916.94 688.87 7804.74
00:22:37.334 0
00:22:37.334 07:28:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:37.334 07:28:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:37.334 07:28:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:37.334 07:28:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:37.334 | .driver_specific
00:22:37.334 | .nvme_error
00:22:37.334 | .status_code
00:22:37.334 | .command_transient_transport_error'
00:22:37.592 07:28:39 -- host/digest.sh@71 -- # (( 538 > 0 ))
00:22:37.592 07:28:39 -- host/digest.sh@73 -- # killprocess 97471
00:22:37.592 07:28:39 -- common/autotest_common.sh@926 -- # '[' -z 97471 ']'
00:22:37.592 07:28:39 -- common/autotest_common.sh@930 -- # kill -0 97471
00:22:37.592 07:28:39 -- common/autotest_common.sh@931 -- # uname
00:22:37.592 07:28:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:37.592 07:28:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97471
00:22:37.592 07:28:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:37.592 07:28:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:37.593 killing process with pid 97471
00:22:37.593 07:28:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97471'
00:22:37.593 07:28:39 -- common/autotest_common.sh@945 -- # kill 97471
00:22:37.593 Received shutdown signal, test time was about 2.000000 seconds
00:22:37.593
00:22:37.593 Latency(us)
00:22:37.593 [2024-11-04T07:28:39.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.593 [2024-11-04T07:28:39.434Z] ===================================================================================================================
00:22:37.593 [2024-11-04T07:28:39.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:37.593 07:28:39 -- common/autotest_common.sh@950 -- # wait 97471
00:22:37.852 07:28:39 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:37.852 07:28:39 -- host/digest.sh@54 -- # local rw bs qd
00:22:37.852 07:28:39 -- host/digest.sh@56 -- # rw=randwrite
00:22:37.852 07:28:39 -- host/digest.sh@56 -- # bs=4096
00:22:37.852 07:28:39 -- host/digest.sh@56 -- # qd=128
00:22:37.852 07:28:39 -- host/digest.sh@58 -- # bperfpid=97557
00:22:37.852 07:28:39 -- host/digest.sh@60 -- # waitforlisten 97557 /var/tmp/bperf.sock
00:22:37.852 07:28:39 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:37.852 07:28:39 -- common/autotest_common.sh@819 -- # '[' -z 97557 ']'
00:22:37.852 07:28:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:37.852 07:28:39 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:37.852 07:28:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:37.852 07:28:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:37.852 07:28:39 -- common/autotest_common.sh@10 -- # set +x
00:22:37.852 [2024-11-04 07:28:39.622232] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:22:37.852 [2024-11-04 07:28:39.622328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97557 ]
00:22:38.109 [2024-11-04 07:28:39.757595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:38.109 [2024-11-04 07:28:39.813646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.077 07:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:39.077 07:28:40 -- common/autotest_common.sh@852 -- # return 0
00:22:39.077 07:28:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:39.077 07:28:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:39.077 07:28:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:39.077 07:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:39.077 07:28:40 -- common/autotest_common.sh@10 -- # set +x
00:22:39.077 07:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:39.077 07:28:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.077 07:28:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.644 nvme0n1
00:22:39.644 07:28:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:39.644 07:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:39.644 07:28:41 -- common/autotest_common.sh@10 -- # set +x
00:22:39.644 07:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:39.644 07:28:41 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:39.644 07:28:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:39.644 Running I/O for 2 seconds...
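The trace above is the whole randwrite error-injection pass in condensed form: bdevperf is started against /var/tmp/bperf.sock, per-command NVMe error statistics are enabled, a controller is attached with TCP data digest (--ddgst) turned on, every 256th crc32c computation is corrupted via the accel error injector, I/O runs for two seconds, and the pass criterion is a non-zero command_transient_transport_error count read back through bdev_get_iostat. The sketch below is not the actual host/digest.sh; the helper names bperf_rpc/target_rpc, the polling loop standing in for waitforlisten, and the assumption that the injection RPC goes to an nvmf target app on the default RPC socket are illustrative, while the individual commands and flags are the ones visible in the trace.

#!/usr/bin/env bash
# Hedged sketch of the digest-error flow traced above (not the real test script).
SPDK_DIR=/home/vagrant/spdk_repo/spdk            # repo path as it appears in this log
SOCK=/var/tmp/bperf.sock
bperf_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }   # talks to bdevperf
target_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }              # stand-in for rpc_cmd; assumes nvmf target on default socket

# Start bdevperf idle (-z waits for an RPC-driven run); flags copied from the trace.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done         # simplified stand-in for waitforlisten

# Record error counts per NVMe status code, retry forever, and attach with data digest enabled.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
target_rpc accel_error_inject_error -o crc32c -t disable       # keep the connect itself clean
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c so data digests mismatch on the wire, then run the 2-second workload.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# Pass criterion used by the test: the transient transport error counter must be greater than zero.
errs=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) && echo "data digest errors observed: $errs"
kill "$bperfpid"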
00:22:39.644 [2024-11-04 07:28:41.347201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eea00 00:22:39.644 [2024-11-04 07:28:41.347928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.347971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.355786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5ec8 00:22:39.644 [2024-11-04 07:28:41.356709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.356752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.366654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190df550 00:22:39.644 [2024-11-04 07:28:41.367368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.367399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.376016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ebfd0 00:22:39.644 [2024-11-04 07:28:41.376695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.376726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.385359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6fa8 00:22:39.644 [2024-11-04 07:28:41.386038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.386066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.394790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eb328 00:22:39.644 [2024-11-04 07:28:41.395822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.395851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.404504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e23b8 00:22:39.644 [2024-11-04 07:28:41.405830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.413564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea680 00:22:39.644 [2024-11-04 07:28:41.414509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.414537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.423482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6fa8 00:22:39.644 [2024-11-04 07:28:41.423906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.423931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.432682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1430 00:22:39.644 [2024-11-04 07:28:41.433225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.433248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.442002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5658 00:22:39.644 [2024-11-04 07:28:41.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.442530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.451235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f20d8 00:22:39.644 [2024-11-04 07:28:41.451673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.644 [2024-11-04 07:28:41.451709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:39.644 [2024-11-04 07:28:41.460304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e49b0 00:22:39.645 [2024-11-04 07:28:41.460807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.645 [2024-11-04 07:28:41.460829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:39.645 [2024-11-04 07:28:41.471661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2d80 00:22:39.645 [2024-11-04 07:28:41.473169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.645 [2024-11-04 07:28:41.473197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.645 [2024-11-04 07:28:41.481130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb8b8 00:22:39.645 [2024-11-04 07:28:41.482718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.645 [2024-11-04 07:28:41.482746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.904 [2024-11-04 07:28:41.491289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f20d8 00:22:39.904 [2024-11-04 07:28:41.492566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.904 [2024-11-04 07:28:41.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:39.904 [2024-11-04 07:28:41.499721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190dece0 00:22:39.904 [2024-11-04 07:28:41.500186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.904 [2024-11-04 07:28:41.500220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.904 [2024-11-04 07:28:41.508937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fa3a0 00:22:39.904 [2024-11-04 07:28:41.509969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.904 [2024-11-04 07:28:41.509996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.904 [2024-11-04 07:28:41.518447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6458 00:22:39.904 [2024-11-04 07:28:41.518961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.904 [2024-11-04 07:28:41.518984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.527893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2948 00:22:39.905 [2024-11-04 07:28:41.528687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.528715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.537253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee5c8 00:22:39.905 [2024-11-04 07:28:41.537653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.537677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.546631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f92c0 00:22:39.905 [2024-11-04 07:28:41.547132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.547166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.556103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fac10 00:22:39.905 [2024-11-04 07:28:41.557069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.557096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.565748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eee38 00:22:39.905 [2024-11-04 07:28:41.566614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.566645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.573768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e1710 00:22:39.905 [2024-11-04 07:28:41.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.574655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.583280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed0b0 00:22:39.905 [2024-11-04 07:28:41.583403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.583423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.592742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f8618 00:22:39.905 [2024-11-04 07:28:41.593397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.593426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.602101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e12d8 00:22:39.905 [2024-11-04 07:28:41.603173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.603214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.611050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2d80 00:22:39.905 [2024-11-04 07:28:41.611693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.611728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.621147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2948 00:22:39.905 [2024-11-04 07:28:41.621538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.621562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.631017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f4b08 00:22:39.905 [2024-11-04 07:28:41.632075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.632114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.640423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2948 00:22:39.905 [2024-11-04 07:28:41.641423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.641451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.649727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fcdd0 00:22:39.905 [2024-11-04 07:28:41.650146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.650171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.659553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fcdd0 00:22:39.905 [2024-11-04 07:28:41.660175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.660203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.668968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f46d0 00:22:39.905 [2024-11-04 07:28:41.669503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.669531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.677537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e23b8 00:22:39.905 [2024-11-04 07:28:41.678409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.678449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.686434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fe2e8 00:22:39.905 [2024-11-04 07:28:41.686528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.686548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.698074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2d80 00:22:39.905 [2024-11-04 07:28:41.699139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.699178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.706321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5378 00:22:39.905 [2024-11-04 07:28:41.707348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.707375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.716536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee5c8 00:22:39.905 [2024-11-04 07:28:41.717774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.717811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.727727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190df988 00:22:39.905 [2024-11-04 07:28:41.728246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.905 [2024-11-04 07:28:41.737985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e9168 00:22:39.905 [2024-11-04 07:28:41.738903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.905 [2024-11-04 07:28:41.738929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.749185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190de470 00:22:40.165 [2024-11-04 07:28:41.750038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.750075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.759050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb8b8 00:22:40.165 [2024-11-04 07:28:41.759630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.759659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.768395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e7c50 00:22:40.165 [2024-11-04 07:28:41.768990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.769019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.777688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f8e88 00:22:40.165 [2024-11-04 07:28:41.778351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.778387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.787044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6300 00:22:40.165 [2024-11-04 07:28:41.787554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.787582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.796821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6300 00:22:40.165 [2024-11-04 07:28:41.797321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.797349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.805262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190dece0 00:22:40.165 [2024-11-04 07:28:41.806538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 
07:28:41.806573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.815524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fac10 00:22:40.165 [2024-11-04 07:28:41.816338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.816390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.825150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fac10 00:22:40.165 [2024-11-04 07:28:41.825714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.825743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.834479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3060 00:22:40.165 [2024-11-04 07:28:41.835096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.835124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.843802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3d08 00:22:40.165 [2024-11-04 07:28:41.844392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.844420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.853003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea248 00:22:40.165 [2024-11-04 07:28:41.853639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.853667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.862307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5be8 00:22:40.165 [2024-11-04 07:28:41.862936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.862964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.871536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e27f0 00:22:40.165 [2024-11-04 07:28:41.872144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:40.165 [2024-11-04 07:28:41.872171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.881616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb480 00:22:40.165 [2024-11-04 07:28:41.882588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.882615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.890986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e84c0 00:22:40.165 [2024-11-04 07:28:41.891675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.891703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.899780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fa7d8 00:22:40.165 [2024-11-04 07:28:41.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.901178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.909201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5658 00:22:40.165 [2024-11-04 07:28:41.909866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.909907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.918459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3d08 00:22:40.165 [2024-11-04 07:28:41.919129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.919157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.927684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee190 00:22:40.165 [2024-11-04 07:28:41.928357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.928389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.936832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5be8 00:22:40.165 [2024-11-04 07:28:41.938445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11741 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.938474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.946528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ef270 00:22:40.165 [2024-11-04 07:28:41.947014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.947039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.956220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fbcf0 00:22:40.165 [2024-11-04 07:28:41.957054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.957082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.964828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e73e0 00:22:40.165 [2024-11-04 07:28:41.965151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.165 [2024-11-04 07:28:41.965176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:40.165 [2024-11-04 07:28:41.975003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f9f68 00:22:40.166 [2024-11-04 07:28:41.975507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.166 [2024-11-04 07:28:41.975534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.166 [2024-11-04 07:28:41.984292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6020 00:22:40.166 [2024-11-04 07:28:41.985472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.166 [2024-11-04 07:28:41.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.166 [2024-11-04 07:28:41.993684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f8e88 00:22:40.166 [2024-11-04 07:28:41.994624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.166 [2024-11-04 07:28:41.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.166 [2024-11-04 07:28:42.003541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd208 00:22:40.425 [2024-11-04 07:28:42.004140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2922 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.012584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e49b0 00:22:40.425 [2024-11-04 07:28:42.013336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.013363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.022954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fdeb0 00:22:40.425 [2024-11-04 07:28:42.023562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.023590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.032202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea248 00:22:40.425 [2024-11-04 07:28:42.032763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.032791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.041538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5220 00:22:40.425 [2024-11-04 07:28:42.042121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.042149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.050894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1430 00:22:40.425 [2024-11-04 07:28:42.051500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.051527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.061257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0ea0 00:22:40.425 [2024-11-04 07:28:42.062312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.062338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.069586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f96f8 00:22:40.425 [2024-11-04 07:28:42.070194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.070224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.078861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6b70 00:22:40.425 [2024-11-04 07:28:42.079482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.079509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.088094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e4140 00:22:40.425 [2024-11-04 07:28:42.088664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.088692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.097375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2d80 00:22:40.425 [2024-11-04 07:28:42.097883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.097908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.106629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e4140 00:22:40.425 [2024-11-04 07:28:42.107167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.107192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.117039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6b70 00:22:40.425 [2024-11-04 07:28:42.117548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.117577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.127872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1868 00:22:40.425 [2024-11-04 07:28:42.128433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.128461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.137963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e1b48 00:22:40.425 [2024-11-04 07:28:42.138607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:22065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.138634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.149222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee5c8 00:22:40.425 [2024-11-04 07:28:42.150434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.425 [2024-11-04 07:28:42.150462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.425 [2024-11-04 07:28:42.158695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eb760 00:22:40.425 [2024-11-04 07:28:42.159318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.159345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.169621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e38d0 00:22:40.426 [2024-11-04 07:28:42.170682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.170723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.178805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e4578 00:22:40.426 [2024-11-04 07:28:42.179720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.179748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.188418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5378 00:22:40.426 [2024-11-04 07:28:42.188822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.188847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.198204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fc998 00:22:40.426 [2024-11-04 07:28:42.199270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.199310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.207858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f0bc0 00:22:40.426 [2024-11-04 07:28:42.208323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.208355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.217328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6020 00:22:40.426 [2024-11-04 07:28:42.217603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.217628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.227423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f0350 00:22:40.426 [2024-11-04 07:28:42.228215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.228243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.237320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fef90 00:22:40.426 [2024-11-04 07:28:42.238204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.238230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.247314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f0350 00:22:40.426 [2024-11-04 07:28:42.248437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.248465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.426 [2024-11-04 07:28:42.256816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1868 00:22:40.426 [2024-11-04 07:28:42.257095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.426 [2024-11-04 07:28:42.257120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.267142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190edd58 00:22:40.685 [2024-11-04 07:28:42.267586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.277141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f7100 00:22:40.685 [2024-11-04 07:28:42.277766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.277795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.287782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd640 00:22:40.685 [2024-11-04 07:28:42.288242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.288290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.297826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5a90 00:22:40.685 [2024-11-04 07:28:42.298843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.298882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.307606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fda78 00:22:40.685 [2024-11-04 07:28:42.308092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.308118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:40.685 [2024-11-04 07:28:42.318492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fa7d8 00:22:40.685 [2024-11-04 07:28:42.319906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.685 [2024-11-04 07:28:42.319933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.327177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5378 00:22:40.686 [2024-11-04 07:28:42.327567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.327603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.339934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190df118 00:22:40.686 [2024-11-04 07:28:42.340531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.340560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.351149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f7970 00:22:40.686 [2024-11-04 
07:28:42.352152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.352179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.360811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2510 00:22:40.686 [2024-11-04 07:28:42.361135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.361155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.372575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e73e0 00:22:40.686 [2024-11-04 07:28:42.373790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.373816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.379883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea248 00:22:40.686 [2024-11-04 07:28:42.380832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.388768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ecc78 00:22:40.686 [2024-11-04 07:28:42.388961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.388981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.398548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f81e0 00:22:40.686 [2024-11-04 07:28:42.399104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.399130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.407700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5a90 00:22:40.686 [2024-11-04 07:28:42.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.408841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.417740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e8d30 00:22:40.686 
[2024-11-04 07:28:42.418935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.418962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.427846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1ca0 00:22:40.686 [2024-11-04 07:28:42.428554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.428582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.436097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f2510 00:22:40.686 [2024-11-04 07:28:42.436847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.445485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb480 00:22:40.686 [2024-11-04 07:28:42.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.446172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.455505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e84c0 00:22:40.686 [2024-11-04 07:28:42.456445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.464799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1868 00:22:40.686 [2024-11-04 07:28:42.465241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.465275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.474585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eb328 00:22:40.686 [2024-11-04 07:28:42.475828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.475856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.484141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with 
pdu=0x2000190e3498 00:22:40.686 [2024-11-04 07:28:42.485223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.485250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.493177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ff3c8 00:22:40.686 [2024-11-04 07:28:42.494070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.494097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.501856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e6738 00:22:40.686 [2024-11-04 07:28:42.502348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.502375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.513272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f8e88 00:22:40.686 [2024-11-04 07:28:42.514026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.514052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.686 [2024-11-04 07:28:42.521066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e7c50 00:22:40.686 [2024-11-04 07:28:42.522076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.686 [2024-11-04 07:28:42.522109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.531323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fc560 00:22:40.946 [2024-11-04 07:28:42.532168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.532209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.541171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ec408 00:22:40.946 [2024-11-04 07:28:42.541496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.541520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.550767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb4b0e0) with pdu=0x2000190ed4e8 00:22:40.946 [2024-11-04 07:28:42.551687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.551716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.560272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eff18 00:22:40.946 [2024-11-04 07:28:42.561130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.561169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.569958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eaef0 00:22:40.946 [2024-11-04 07:28:42.571152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.571180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.579612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f8618 00:22:40.946 [2024-11-04 07:28:42.580171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.580200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.589070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e49b0 00:22:40.946 [2024-11-04 07:28:42.589695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.589724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.598615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e95a0 00:22:40.946 [2024-11-04 07:28:42.599679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.599707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.608814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1ca0 00:22:40.946 [2024-11-04 07:28:42.610240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.610279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.616499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f3e60 00:22:40.946 [2024-11-04 07:28:42.617250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.617277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.625851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1ca0 00:22:40.946 [2024-11-04 07:28:42.627021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.627050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.635409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea680 00:22:40.946 [2024-11-04 07:28:42.635924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.635949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.644689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f7970 00:22:40.946 [2024-11-04 07:28:42.645313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.645342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.653998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6458 00:22:40.946 [2024-11-04 07:28:42.654396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.654421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.663428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e84c0 00:22:40.946 [2024-11-04 07:28:42.664072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.664100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.672885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f0350 00:22:40.946 [2024-11-04 07:28:42.673282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.682172] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea248 00:22:40.946 [2024-11-04 07:28:42.682582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.946 [2024-11-04 07:28:42.682615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:40.946 [2024-11-04 07:28:42.691526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eb760 00:22:40.947 [2024-11-04 07:28:42.691889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.691913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.701632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190de8a8 00:22:40.947 [2024-11-04 07:28:42.702859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.702902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.711166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190de038 00:22:40.947 [2024-11-04 07:28:42.711795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.711823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.720112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eea00 00:22:40.947 [2024-11-04 07:28:42.721071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.721098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.729183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f4b08 00:22:40.947 [2024-11-04 07:28:42.729398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.729418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.740170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e88f8 00:22:40.947 [2024-11-04 07:28:42.740991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.741018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.751750] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190de038 00:22:40.947 [2024-11-04 07:28:42.752629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.752662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.762502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0ea0 00:22:40.947 [2024-11-04 07:28:42.762922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.762960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.772328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6020 00:22:40.947 [2024-11-04 07:28:42.772811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.772840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:40.947 [2024-11-04 07:28:42.783064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f46d0 00:22:40.947 [2024-11-04 07:28:42.784222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.947 [2024-11-04 07:28:42.784250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.792657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190edd58 00:22:41.206 [2024-11-04 07:28:42.793297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.793326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.802124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eaef0 00:22:41.206 [2024-11-04 07:28:42.803288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.803317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.810706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eaab8 00:22:41.206 [2024-11-04 07:28:42.811235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.811268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 
07:28:42.820853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6cc8 00:22:41.206 [2024-11-04 07:28:42.821655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.821689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.830498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ecc78 00:22:41.206 [2024-11-04 07:28:42.831093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.831122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.840179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f3e60 00:22:41.206 [2024-11-04 07:28:42.840785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.840813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.849622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190dfdc0 00:22:41.206 [2024-11-04 07:28:42.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.206 [2024-11-04 07:28:42.850654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.206 [2024-11-04 07:28:42.859123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3498 00:22:41.207 [2024-11-04 07:28:42.860069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.860097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.867836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f7970 00:22:41.207 [2024-11-04 07:28:42.868269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.868293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.876918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f7538 00:22:41.207 [2024-11-04 07:28:42.878081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.878109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.207 
[2024-11-04 07:28:42.886270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0a68 00:22:41.207 [2024-11-04 07:28:42.887305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.887337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.895244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f6cc8 00:22:41.207 [2024-11-04 07:28:42.896030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.896084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.905054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee5c8 00:22:41.207 [2024-11-04 07:28:42.906074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.906113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.914445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed4e8 00:22:41.207 [2024-11-04 07:28:42.914858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.914907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.924041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed4e8 00:22:41.207 [2024-11-04 07:28:42.924603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.924632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.932617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd640 00:22:41.207 [2024-11-04 07:28:42.933530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.933558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.942090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190efae0 00:22:41.207 [2024-11-04 07:28:42.942643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.942672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:22:41.207 [2024-11-04 07:28:42.951551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e27f0 00:22:41.207 [2024-11-04 07:28:42.951949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.951984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.961243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0ea0 00:22:41.207 [2024-11-04 07:28:42.961749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.961780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.970434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f31b8 00:22:41.207 [2024-11-04 07:28:42.970972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.971020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.981660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e49b0 00:22:41.207 [2024-11-04 07:28:42.983251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.983279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:42.991004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed920 00:22:41.207 [2024-11-04 07:28:42.992402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:42.992430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:43.000303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0ea0 00:22:41.207 [2024-11-04 07:28:43.001717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:43.001744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:43.010482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190df988 00:22:41.207 [2024-11-04 07:28:43.011752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:43.011778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:43.017798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed0b0 00:22:41.207 [2024-11-04 07:28:43.018506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:43.018534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:43.027297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd208 00:22:41.207 [2024-11-04 07:28:43.027982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:43.028009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.207 [2024-11-04 07:28:43.036470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ebb98 00:22:41.207 [2024-11-04 07:28:43.037459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.207 [2024-11-04 07:28:43.037487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.046355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd208 00:22:41.467 [2024-11-04 07:28:43.047431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.047459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.056328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb480 00:22:41.467 [2024-11-04 07:28:43.056569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.056598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.065907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eaab8 00:22:41.467 [2024-11-04 07:28:43.066904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.066941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.075424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e38d0 00:22:41.467 [2024-11-04 07:28:43.076441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.076469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 
cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.084966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5a90 00:22:41.467 [2024-11-04 07:28:43.085701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.085728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.094622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1ca0 00:22:41.467 [2024-11-04 07:28:43.095230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.095258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.104235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3d08 00:22:41.467 [2024-11-04 07:28:43.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.104828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.112854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ec408 00:22:41.467 [2024-11-04 07:28:43.113621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.113657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.121935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ea248 00:22:41.467 [2024-11-04 07:28:43.122046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.131271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190edd58 00:22:41.467 [2024-11-04 07:28:43.131713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.131741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.140603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f1868 00:22:41.467 [2024-11-04 07:28:43.141077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.141102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.151459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f9f68 00:22:41.467 [2024-11-04 07:28:43.152778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.152805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.161430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ee190 00:22:41.467 [2024-11-04 07:28:43.162900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.162938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.169653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e5220 00:22:41.467 [2024-11-04 07:28:43.170490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.170524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.180502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f92c0 00:22:41.467 [2024-11-04 07:28:43.181301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.181340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.188287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f46d0 00:22:41.467 [2024-11-04 07:28:43.189243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.189270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.197961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f9f68 00:22:41.467 [2024-11-04 07:28:43.198298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.198322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.207857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eb760 00:22:41.467 [2024-11-04 07:28:43.208397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.217107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0ea0 00:22:41.467 [2024-11-04 07:28:43.218153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.218180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.226344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190eea00 00:22:41.467 [2024-11-04 07:28:43.227581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.227610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.235607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e3d08 00:22:41.467 [2024-11-04 07:28:43.235972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.235996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.244673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fd208 00:22:41.467 [2024-11-04 07:28:43.245740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.245767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.254032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190df988 00:22:41.467 [2024-11-04 07:28:43.255004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.467 [2024-11-04 07:28:43.255043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.467 [2024-11-04 07:28:43.263437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e0630 00:22:41.468 [2024-11-04 07:28:43.263606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.468 [2024-11-04 07:28:43.263625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.468 [2024-11-04 07:28:43.273021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fe2e8 00:22:41.468 [2024-11-04 07:28:43.273422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.468 [2024-11-04 07:28:43.273448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.468 [2024-11-04 07:28:43.282272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f5be8 00:22:41.468 [2024-11-04 07:28:43.282583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.468 [2024-11-04 07:28:43.282617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.468 [2024-11-04 07:28:43.291534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190e4578 00:22:41.468 [2024-11-04 07:28:43.291803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.468 [2024-11-04 07:28:43.291830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.468 [2024-11-04 07:28:43.300854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fda78 00:22:41.468 [2024-11-04 07:28:43.301202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.468 [2024-11-04 07:28:43.301227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.727 [2024-11-04 07:28:43.311197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190f9b30 00:22:41.727 [2024-11-04 07:28:43.311991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.727 [2024-11-04 07:28:43.312019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.727 [2024-11-04 07:28:43.321660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190ed0b0 00:22:41.727 [2024-11-04 07:28:43.322097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.727 [2024-11-04 07:28:43.322121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:41.727 [2024-11-04 07:28:43.332453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b0e0) with pdu=0x2000190fb480 00:22:41.727 [2024-11-04 07:28:43.332733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.727 [2024-11-04 07:28:43.332757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.727 00:22:41.727 Latency(us) 00:22:41.727 [2024-11-04T07:28:43.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.727 [2024-11-04T07:28:43.568Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:41.727 nvme0n1 : 2.00 26507.69 103.55 0.00 0.00 4823.81 1869.27 12749.73 00:22:41.727 [2024-11-04T07:28:43.568Z] 
===================================================================================================================
00:22:41.727 [2024-11-04T07:28:43.568Z] Total : 26507.69 103.55 0.00 0.00 4823.81 1869.27 12749.73
00:22:41.727 0
00:22:41.727 07:28:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:41.727 07:28:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:41.727 07:28:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:41.727 07:28:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:41.727 | .driver_specific
00:22:41.727 | .nvme_error
00:22:41.727 | .status_code
00:22:41.727 | .command_transient_transport_error'
00:22:41.985 07:28:43 -- host/digest.sh@71 -- # (( 208 > 0 ))
00:22:41.985 07:28:43 -- host/digest.sh@73 -- # killprocess 97557
00:22:41.985 07:28:43 -- common/autotest_common.sh@926 -- # '[' -z 97557 ']'
00:22:41.985 07:28:43 -- common/autotest_common.sh@930 -- # kill -0 97557
00:22:41.985 07:28:43 -- common/autotest_common.sh@931 -- # uname
00:22:41.985 07:28:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:41.985 07:28:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97557
00:22:41.985 07:28:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:41.985 07:28:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:41.985 07:28:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97557'
00:22:41.985 killing process with pid 97557
00:22:41.985 07:28:43 -- common/autotest_common.sh@945 -- # kill 97557
00:22:41.985 Received shutdown signal, test time was about 2.000000 seconds
00:22:41.985
00:22:41.985 Latency(us)
00:22:41.985 [2024-11-04T07:28:43.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:41.985 [2024-11-04T07:28:43.826Z] ===================================================================================================================
00:22:41.985 [2024-11-04T07:28:43.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:41.985 07:28:43 -- common/autotest_common.sh@950 -- # wait 97557
00:22:41.985 07:28:43 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:22:42.244 07:28:43 -- host/digest.sh@54 -- # local rw bs qd
00:22:42.244 07:28:43 -- host/digest.sh@56 -- # rw=randwrite
00:22:42.244 07:28:43 -- host/digest.sh@56 -- # bs=131072
00:22:42.244 07:28:43 -- host/digest.sh@56 -- # qd=16
00:22:42.244 07:28:43 -- host/digest.sh@58 -- # bperfpid=97653
00:22:42.244 07:28:43 -- host/digest.sh@60 -- # waitforlisten 97653 /var/tmp/bperf.sock
00:22:42.244 07:28:43 -- common/autotest_common.sh@819 -- # '[' -z 97653 ']'
00:22:42.244 07:28:43 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:42.244 07:28:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:42.244 07:28:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:42.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:42.244 07:28:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:42.244 07:28:43 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:42.244 07:28:43 -- common/autotest_common.sh@10 -- # set +x
00:22:42.244 I/O size of 131072 is greater than zero copy threshold (65536).
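The trace above is how the harness decides pass/fail for the run that just finished: it queries bdevperf's RPC socket for the bdev's I/O statistics and extracts the transient-transport-error counter (208 here), which must be greater than zero. A minimal sketch of that readback, assuming the same /var/tmp/bperf.sock RPC socket and the nvme0n1 bdev named in the trace:

# Sketch only: re-creates the get_transient_errcount check traced above.
# Assumes bdevperf is still serving RPC on /var/tmp/bperf.sock and the bdev is nvme0n1,
# and that bdev_nvme_set_options --nvme-error-stat was enabled for this controller.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Each injected CRC32C data-digest error should surface as a transient transport error
# on a completion, so the run only passes when the counter is non-zero.
(( errcount > 0 ))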
00:22:42.244 Zero copy mechanism will not be used. 00:22:42.244 [2024-11-04 07:28:43.939271] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:42.244 [2024-11-04 07:28:43.939376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97653 ] 00:22:42.244 [2024-11-04 07:28:44.076703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.503 [2024-11-04 07:28:44.137636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.070 07:28:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.070 07:28:44 -- common/autotest_common.sh@852 -- # return 0 00:22:43.070 07:28:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.070 07:28:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.328 07:28:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:43.328 07:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.328 07:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:43.328 07:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.329 07:28:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.329 07:28:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.587 nvme0n1 00:22:43.587 07:28:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:43.587 07:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.587 07:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:43.587 07:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.587 07:28:45 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:43.587 07:28:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.587 Zero copy mechanism will not be used. 00:22:43.587 Running I/O for 2 seconds... 
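Before the records that follow, the trace lays out the setup for the 131072-byte, queue-depth-16 digest error case: bdevperf is started idle on its own RPC socket, NVMe error statistics and unlimited retries are enabled, crc32c error injection is first disabled, the TCP target is attached with --ddgst so data digests are generated and verified, injection is switched to corrupt mode with -i 32, and perform_tests starts the workload. A condensed sketch of that sequence, using only commands, paths, and addresses that appear in the trace:

# Sketch only: condensed form of the run_bperf_err setup traced above.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# Start bdevperf against a private RPC socket; -z keeps it idle until perform_tests.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &

$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$RPC accel_error_inject_error -o crc32c -t disable           # start with injection off
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest enabled
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32      # corrupt crc32c results
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests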
00:22:43.587 [2024-11-04 07:28:45.418066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.587 [2024-11-04 07:28:45.418413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.587 [2024-11-04 07:28:45.418458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.587 [2024-11-04 07:28:45.422553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.587 [2024-11-04 07:28:45.422712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.587 [2024-11-04 07:28:45.422735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.427482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.427583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.427605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.432318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.432401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.432422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.436642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.436725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.436746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.441073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.441146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.441168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.445536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.445642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.445663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.450049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.450240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.450262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.454359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.454552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.454593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.458786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.458939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.458961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.463151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.847 [2024-11-04 07:28:45.463239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.847 [2024-11-04 07:28:45.463261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.847 [2024-11-04 07:28:45.467481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.467558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.467579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.472043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.472122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.472144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.476439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.476562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.476584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.480700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.480895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.480917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.485215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.485391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.489453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.489603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.489624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.493847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.494044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.494066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.498317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.498414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.498435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.502677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.502793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.502814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.507207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.507284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.507306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.511510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.511637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.511659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.515831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.516021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.516043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.520288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.520462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.520483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.524533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.524795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.524827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.528817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.529058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.529081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.533104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.533205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.533226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.537453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.537552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 
07:28:45.537574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.541936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.542064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.542084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.546389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.546514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.546535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.550741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.550951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.550972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.555261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.555440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.555461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.559529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.559756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.559777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.563784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.563942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.563964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.568176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.568280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.568302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.572562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.572639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.572661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.576844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.576934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.576956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.848 [2024-11-04 07:28:45.581203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.848 [2024-11-04 07:28:45.581330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.848 [2024-11-04 07:28:45.581351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.585543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.585744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.585764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.590057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.590248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.590268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.594356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.594596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.594629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.598729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.598923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.598945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.603168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.603253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.603274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.607477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.607589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.607611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.611654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.611769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.611789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.615991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.616116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.616137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.620180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.620338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.620359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.624604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.624786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.624807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.628925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.629057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.629078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.633257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.633335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.633356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.637677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.637793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.637815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.642070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.642170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.642191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.646387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.646462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.646483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.650734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.650890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.650920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.655119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.655313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.655335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.659523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.659694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.659716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.663825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.664097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.664128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.668081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.668175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.668196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.672398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.672473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.672494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.676646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.676738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.676759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.849 [2024-11-04 07:28:45.681053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:43.849 [2024-11-04 07:28:45.681129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.849 [2024-11-04 07:28:45.681149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.685784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.685956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.685979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.690339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.690505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.690526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.695264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.695456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.695477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.699639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.699927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.699960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.703950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.704063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.704085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.708338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.708455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.708476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.712758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.712862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.717146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.717225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.717245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.721425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 
07:28:45.721548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.721569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.725721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.725926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.725947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.730333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.730524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.734972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.735176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.735218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.739869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.740058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.740079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.744929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.745052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.745073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.749738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.749835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.749855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.754727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 
00:22:44.110 [2024-11-04 07:28:45.754861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.754893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.759650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.759794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.759815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.764399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.764606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.764626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.769220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.769412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.773628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.773890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.773921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.778345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.778522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.778544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.782786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.782948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.787435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with 
pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.787541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.110 [2024-11-04 07:28:45.787562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.110 [2024-11-04 07:28:45.792284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.110 [2024-11-04 07:28:45.792380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.792401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.797321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.797484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.797505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.802315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.802434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.802455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.807644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.807852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.807885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.812707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.812971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.812992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.817705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.817924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.822513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.822672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.822692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.827382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.827485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.827506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.832072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.832166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.832187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.837093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.837240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.837274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.841923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.842124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.842144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.846429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.846633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.846654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.850929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.851205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.851226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.855432] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.855588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.855609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.860096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.860230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.860251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.864678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.864778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.864799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.869202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.869296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.869316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.873630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.873780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.873801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.878330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.878453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.878474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.883047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.883252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.883273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.887481] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.887744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.887765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.892038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.892199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.892220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.896638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.896753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.896773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.901085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.901207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.905381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.111 [2024-11-04 07:28:45.905507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.111 [2024-11-04 07:28:45.905527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.111 [2024-11-04 07:28:45.909869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.910040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.910061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.914591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.914796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.914817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
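[editor's note] The repeating pairs above come from the data-digest error path of the TCP transport test: data_crc32_calc_done in tcp.c reports a CRC32C mismatch on a data PDU, and the associated WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22), which is the expected outcome when the digest check fails. As a minimal, self-contained sketch (not SPDK's tcp.c code, and with hypothetical buffer/variable names), the kind of check behind the "Data digest error" lines is a CRC32C over the PDU payload compared against the digest received on the wire, assuming the standard reflected Castagnoli polynomial 0x82F63B78 with 0xFFFFFFFF initial value and final XOR:

    /*
     * Illustrative sketch only: software CRC32C (Castagnoli) over a payload,
     * the same class of checksum the NVMe/TCP data digest (DDGST) carries.
     * A receiver recomputes it over the data PDU payload and compares it with
     * the digest in the PDU; a mismatch is what the log reports above.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;              /* initial value */

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                /* reflected Castagnoli polynomial 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;                /* final XOR */
    }

    int main(void)
    {
        uint8_t payload[512] = {0};              /* hypothetical PDU payload */
        uint32_t expected = crc32c(payload, sizeof(payload));
        uint32_t received = 0xDEADBEEFu;         /* pretend digest from the wire */

        if (received != expected) {
            /* in the transport this surfaces as a transient transport error */
            printf("data digest mismatch: got 0x%08x, want 0x%08x\n",
                   received, expected);
        }
        return 0;
    }

In the real transport the comparison happens per received data PDU and the failure is propagated to the completion path, which is why each digest error line is followed by a completion printed with a transport error status rather than success; the lba/len values in the log are logical-block addresses and counts from the test's WRITE commands, not byte sizes.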
00:22:44.112 [2024-11-04 07:28:45.919226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.919429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.919450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.923593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.923776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.923796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.927994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.928124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.928144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.932610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.932724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.932746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.937132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.937247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.937267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.941545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.941651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.941672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.112 [2024-11-04 07:28:45.946288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.112 [2024-11-04 07:28:45.946489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.112 [2024-11-04 07:28:45.946509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.951430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.951624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.951645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.956341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.956518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.956539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.960763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.961058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.961087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.965271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.965425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.965446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.969667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.969782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.969803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.974024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.974117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.974138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.978170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.978253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.978273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.982468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.986821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.987062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.987084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.991395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.991605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.991627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:45.995800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:45.995987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.372 [2024-11-04 07:28:45.996008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.372 [2024-11-04 07:28:46.000134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.372 [2024-11-04 07:28:46.000262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.000282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.004415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.004529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.004549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.008712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.008789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.008810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.012998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.013098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.013118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.017340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.017468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.017488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.021443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.021622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.021642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.025831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.026023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.026044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.030182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.030375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.030395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.034675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.034811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.034833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.039112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.039228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 
07:28:46.039250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.043381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.043539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.043561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.047667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.047757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.047779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.052038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.052130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.052152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.056473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.056653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.056675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.060846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.061038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.061059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.065192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.065362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.065383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.069455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.069642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:44.373 [2024-11-04 07:28:46.069663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.073885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.073969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.073990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.078208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.078286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.078306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.082485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.082604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.087158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.087281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.087302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.091521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.091677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.091699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.096053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.096248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.096270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.100397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.100652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.100680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.104624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.104717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.104738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.108996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.109085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.109106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.113264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.373 [2024-11-04 07:28:46.113340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.373 [2024-11-04 07:28:46.113361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.373 [2024-11-04 07:28:46.117474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.117550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.117571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.121831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.121970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.121992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.126168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.126339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.126360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.130679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.130883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.130916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.135098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.135292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.135313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.139379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.139463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.139484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.143922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.144035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.148266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.148350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.152602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.152678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.152699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.156980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.157104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.157126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.161295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.161440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.161461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.165722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.165894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.165915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.170087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.170329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.170350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.174389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.174505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.174526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.178867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.178998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.179018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.183347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.183459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.183480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.187705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.187799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.187820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.192138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.192280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.192301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.196568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.196760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.196781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.200942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.201119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.201140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.205264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.205532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.205559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.374 [2024-11-04 07:28:46.209817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.374 [2024-11-04 07:28:46.210042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.374 [2024-11-04 07:28:46.210064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.214360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.214447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.214468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.219117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.219212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.219232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.223385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 
07:28:46.223496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.223517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.227799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.227943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.227965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.232134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.232309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.232330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.236731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.236920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.236942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.241091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.241328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.241348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.245597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.245786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.245807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.249891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.250020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.250041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.254277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 
00:22:44.635 [2024-11-04 07:28:46.254360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.254380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.258613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.258757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.263121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.263266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.263287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.267365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.267551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.267571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.271803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.271992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.272013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.276096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.276326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.276358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.280497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.280687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.280708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.284773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) 
with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.284892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.284913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.289152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.289250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.289271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.293464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.293576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.293596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.297737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.297901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.297922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.302079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.635 [2024-11-04 07:28:46.302258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.635 [2024-11-04 07:28:46.302279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.635 [2024-11-04 07:28:46.306523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.306721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.306742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.310783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.311018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.315130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.315225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.315246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.319514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.319632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.323941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.324042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.324064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.328340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.328418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.328440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.332748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.332894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.332915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.337189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.337394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.337415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.341562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.341766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.341787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.345974] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.346241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.346284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.350290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.350412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.350432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.354706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.354842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.354864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.359158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.359253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.359273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.363570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.363647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.363668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.367924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.368051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.368072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.372148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.372289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.372309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.376706] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.376906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.376927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.381053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.381237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.381258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.385288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.385376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.385397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.389660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.389742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.389762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.394082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.394208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.394228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.398395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.398508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.398528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.403047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.403186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.403206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 
[2024-11-04 07:28:46.407473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.407694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.411959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.412133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.412153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.416273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.416545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.416574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.420624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.636 [2024-11-04 07:28:46.420788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.636 [2024-11-04 07:28:46.420809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.636 [2024-11-04 07:28:46.424980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.425064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.425085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.429380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.429461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.429482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.433780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.433891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:44.637 [2024-11-04 07:28:46.438160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.438291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.438312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.442389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.442587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.442618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.446882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.447102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.447122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.451276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.451567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.451598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.455692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.455821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.455841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.460076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.460153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.460173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.464454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.464540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.464561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.637 [2024-11-04 07:28:46.468780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.637 [2024-11-04 07:28:46.468885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.637 [2024-11-04 07:28:46.468907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.473692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.473835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.473856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.478194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.478370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.478402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.482849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.483106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.483132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.487284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.487481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.487502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.491566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.491643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.491663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.496029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.496124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.496144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.500323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.500400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.500420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.504784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.504884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.504905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.509187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.509314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.897 [2024-11-04 07:28:46.509334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.897 [2024-11-04 07:28:46.513590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.897 [2024-11-04 07:28:46.513839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.513866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.517931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.518117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.518139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.522254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.522597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.522632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.526739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.526836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.526856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.531266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.531382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.531402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.535523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.535614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.535635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.539952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.540030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.540050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.544384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.544506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.544527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.548685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.548857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.548890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.553093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.553284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.553305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.557346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.557500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.557520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.561654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.561745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.561766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.566035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.566211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.566232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.570262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.570353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.570373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.574513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.574618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.574638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.578984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.579143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.579164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.583475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.583629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.583650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.587748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.587940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:44.898 [2024-11-04 07:28:46.587960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.592090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.592263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.592284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.596355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.596455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.600758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.600840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.600861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.605071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.605183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.609330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.609411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.609432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.613689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.613812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.613832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.617965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.618142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.618163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.622329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.622500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.622521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.626682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.898 [2024-11-04 07:28:46.626829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.898 [2024-11-04 07:28:46.626850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.898 [2024-11-04 07:28:46.631102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.631208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.631228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.635528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.635643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.635664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.639846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.639951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.639972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.644312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.644424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.644445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.648594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.648724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.648744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.652959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.653128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.653149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.657396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.657569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.657590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.661744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.661965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.661986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.666123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.666201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.666222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.670510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.670662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.670683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.674865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.674993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.675021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.679250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.679325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.679345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.683582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.683704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.683725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.687844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.688095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.688122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.692146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.692322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.692342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.696475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.696703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.696723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.700895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.701053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.701074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.705377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.705477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.705497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.709770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 
07:28:46.709869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.714096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.714178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.714198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.718411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.718542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.718572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.722836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.723047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.723068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.727268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.727444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.727465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.899 [2024-11-04 07:28:46.731621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:44.899 [2024-11-04 07:28:46.731766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.899 [2024-11-04 07:28:46.731788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.736240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.736401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.736423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.740703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 
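For reference when reading the spdk_nvme_print_completion lines, the "(00/22)" pair is status code type 0x0 (generic command status) and status code 0x22 (Command Transient Transport Error); the trailing p/m/dnr fields are the completion queue entry's phase tag, more bit, and do-not-retry bit, and sqhd is its separate 16-bit SQ head pointer. A small hedged C sketch of decoding the 16-bit status/phase word, with the bit layout taken from the NVMe completion queue entry (struct and function names are illustrative):

    /* Illustrative decoder for the 16-bit {status, phase} word that the
     * completion print summarizes as "(SCT/SC) ... p:.. m:.. dnr:..". */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        unsigned p;    /* phase tag */
        unsigned sc;   /* status code, e.g. 0x22 = Command Transient Transport Error */
        unsigned sct;  /* status code type, 0x0 = generic command status */
        unsigned crd;  /* command retry delay */
        unsigned m;    /* more information available in the error log */
        unsigned dnr;  /* do not retry */
    };

    static struct nvme_status decode_status(uint16_t w)
    {
        struct nvme_status s;
        s.p   = (w >> 0)  & 0x1;
        s.sc  = (w >> 1)  & 0xFF;
        s.sct = (w >> 9)  & 0x7;
        s.crd = (w >> 12) & 0x3;
        s.m   = (w >> 14) & 0x1;
        s.dnr = (w >> 15) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT 0x0, SC 0x22, all other bits clear, as in the log lines above. */
        uint16_t w = (uint16_t)((0x0u << 9) | (0x22u << 1));
        struct nvme_status s = decode_status(w);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Because dnr is 0 on every completion here, the transport error is reported as retryable, which is consistent with the "TRANSIENT" wording in the printed status.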
00:22:45.161 [2024-11-04 07:28:46.740796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.740817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.745419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.745532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.745554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.749743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.749820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.749841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.754175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.754298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.754320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.758402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.758587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.758619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.763106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.763300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.763321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.767522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.767680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.767701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.771812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with 
pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.771917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.771938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.161 [2024-11-04 07:28:46.776264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.161 [2024-11-04 07:28:46.776379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.161 [2024-11-04 07:28:46.776399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.780545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.780630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.780651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.784934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.785012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.785033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.789316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.789440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.789460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.793687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.793815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.793835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.798120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.798293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.798314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.802421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.802690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.802711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.806758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.806885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.806916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.811300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.811401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.811422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.815989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.816085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.816105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.820632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.820734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.820755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.825646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.825804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.825826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.830799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.831003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.831024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.835789] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.835963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.835984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.840626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.840890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.840922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.845388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.845469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.845490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.850196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.850398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.850419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.854836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.854984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.855005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.859484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.859584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.859605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.863861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.864014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.864035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.868220] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.868405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.868426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.872713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.872902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.872923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.876991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.877253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.877286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.881262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.881417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.881438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.885623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.885733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.885754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.889985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.890078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.890098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.162 [2024-11-04 07:28:46.894295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.894393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.162 
[2024-11-04 07:28:46.898651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.162 [2024-11-04 07:28:46.898782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.162 [2024-11-04 07:28:46.898803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.903141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.903321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.903342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.907564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.907739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.907760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.911711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.911840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.911861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.916045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.916139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.916159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.920385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.920546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.920566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.924705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.924796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.924816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:45.163 [2024-11-04 07:28:46.929077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.929174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.929195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.933485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.933634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.933654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.937859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.938072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.938092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.942236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.942439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.942460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.946650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.946774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.946793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.951152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.951263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.951283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.955751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.955935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.960533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.960659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.960679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.965592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.965707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.965729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.970491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.970666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.970687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.975444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.975687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.975719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.980497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.980666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.980687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.985270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.985398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.985418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.989983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.990112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.990133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.163 [2024-11-04 07:28:46.994782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.163 [2024-11-04 07:28:46.994941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.163 [2024-11-04 07:28:46.994962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.000049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.000199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.000220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.004596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.004718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.004738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.009329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.009493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.009514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.013652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.013899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.013932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.018359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.018542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.018571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.022836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.022992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.023021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.027371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.027477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.027498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.031769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.031926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.031947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.036560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.036708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.036728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.040986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.041094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.041115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.045507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.045667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.049979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.050200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.050220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.054520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.054785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.054812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.059104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.059222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.063414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.063601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.063621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.067968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.068115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.068136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.072615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.072706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.072727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.077171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.077267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.077287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.081625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.081789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.081810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.086060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.086358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 
07:28:47.086389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.090806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.091009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.091030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.095353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.095482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.095502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.099891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.100034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.100055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.424 [2024-11-04 07:28:47.104285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.424 [2024-11-04 07:28:47.104427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.424 [2024-11-04 07:28:47.104447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.108921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.109078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.109099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.113348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.113438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.113459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.117740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.117917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.425 [2024-11-04 07:28:47.117938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.122219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.122483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.122516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.127063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.127248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.127275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.131416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.131535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.135977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.136091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.136111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.140591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.140748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.140769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.145246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.145371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.145393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.149572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.149663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.149684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.154349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.154529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.154551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.158821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.159051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.159072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.163432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.163586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.163607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.167878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.168014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.168035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.172213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.172381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.172401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.176468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.176648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.176668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.180794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.180885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.180906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.184979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.185087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.185110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.189382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.189528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.189549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.193662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.193935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.193963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.198176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.198360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.198381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.202488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.202636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.202657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.206748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.206864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.206896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.211136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.211278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.211300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.215523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.215606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.215627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.219744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.219822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.219843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.224115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.425 [2024-11-04 07:28:47.224280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.425 [2024-11-04 07:28:47.224301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.425 [2024-11-04 07:28:47.228382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.228564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.228585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.232763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.232964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.232985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.237083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.237212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.237233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.241318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.241403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.241424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.245627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.245768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.245788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.249974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.250082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.250103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.254241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.254322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.426 [2024-11-04 07:28:47.258803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.426 [2024-11-04 07:28:47.258970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.426 [2024-11-04 07:28:47.258992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.685 [2024-11-04 07:28:47.263584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.685 [2024-11-04 07:28:47.263785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.685 [2024-11-04 07:28:47.263805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.685 [2024-11-04 07:28:47.268223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.685 [2024-11-04 07:28:47.268422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.685 [2024-11-04 07:28:47.268443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.685 [2024-11-04 07:28:47.272656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.685 
[2024-11-04 07:28:47.272752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.685 [2024-11-04 07:28:47.272773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.685 [2024-11-04 07:28:47.276945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.685 [2024-11-04 07:28:47.277028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.685 [2024-11-04 07:28:47.277048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.685 [2024-11-04 07:28:47.281356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.685 [2024-11-04 07:28:47.281508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.281529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.285852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.285960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.285980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.290143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.290220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.290240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.294536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.294742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.294762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.299032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.299291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.299323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.303366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with 
pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.303562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.303583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.307576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.307736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.307757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.311838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.311958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.311979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.316099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.316245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.316278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.320366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.320447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.320467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.324619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.324731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.324751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.329036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.329182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.329203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.333240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.333478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.333510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.337737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.337948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.342068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.342216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.342236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.346411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.346506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.346527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.350793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.350994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.351015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.355124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.355245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.355266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.359324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.359425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.359445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.363670] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.363816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.363837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.367931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.368133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.368154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.372258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.372481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.372502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.376435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.376575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.376595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.380738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.380814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.380834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.385029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.385173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.385193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.389332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.389446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.389467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:45.686 [2024-11-04 07:28:47.393491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.393573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.686 [2024-11-04 07:28:47.393593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.686 [2024-11-04 07:28:47.397982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.686 [2024-11-04 07:28:47.398132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.687 [2024-11-04 07:28:47.398153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.687 [2024-11-04 07:28:47.402223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.687 [2024-11-04 07:28:47.402482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.687 [2024-11-04 07:28:47.402502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.687 [2024-11-04 07:28:47.406481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb4b280) with pdu=0x2000190fef90 00:22:45.687 [2024-11-04 07:28:47.406674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.687 [2024-11-04 07:28:47.406694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.687 00:22:45.687 Latency(us) 00:22:45.687 [2024-11-04T07:28:47.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.687 [2024-11-04T07:28:47.528Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:45.687 nvme0n1 : 2.00 6945.27 868.16 0.00 0.00 2299.32 1586.27 9413.35 00:22:45.687 [2024-11-04T07:28:47.528Z] =================================================================================================================== 00:22:45.687 [2024-11-04T07:28:47.528Z] Total : 6945.27 868.16 0.00 0.00 2299.32 1586.27 9413.35 00:22:45.687 0 00:22:45.687 07:28:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:45.687 07:28:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:45.687 07:28:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:45.687 | .driver_specific 00:22:45.687 | .nvme_error 00:22:45.687 | .status_code 00:22:45.687 | .command_transient_transport_error' 00:22:45.687 07:28:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:45.945 07:28:47 -- host/digest.sh@71 -- # (( 448 > 0 )) 00:22:45.945 07:28:47 -- host/digest.sh@73 -- # killprocess 97653 00:22:45.945 07:28:47 -- common/autotest_common.sh@926 -- # '[' -z 97653 ']' 00:22:45.945 07:28:47 -- common/autotest_common.sh@930 -- # kill -0 97653 00:22:45.945 07:28:47 -- common/autotest_common.sh@931 -- # uname 00:22:45.945 07:28:47 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.945 07:28:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97653 00:22:45.945 07:28:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:45.945 07:28:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:45.945 killing process with pid 97653 00:22:45.945 07:28:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97653' 00:22:45.945 Received shutdown signal, test time was about 2.000000 seconds 00:22:45.945 00:22:45.945 Latency(us) 00:22:45.945 [2024-11-04T07:28:47.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.945 [2024-11-04T07:28:47.786Z] =================================================================================================================== 00:22:45.945 [2024-11-04T07:28:47.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.945 07:28:47 -- common/autotest_common.sh@945 -- # kill 97653 00:22:45.945 07:28:47 -- common/autotest_common.sh@950 -- # wait 97653 00:22:46.204 07:28:47 -- host/digest.sh@115 -- # killprocess 97337 00:22:46.204 07:28:47 -- common/autotest_common.sh@926 -- # '[' -z 97337 ']' 00:22:46.204 07:28:47 -- common/autotest_common.sh@930 -- # kill -0 97337 00:22:46.204 07:28:47 -- common/autotest_common.sh@931 -- # uname 00:22:46.204 07:28:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:46.204 07:28:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97337 00:22:46.204 07:28:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:46.204 07:28:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:46.204 killing process with pid 97337 00:22:46.204 07:28:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97337' 00:22:46.204 07:28:48 -- common/autotest_common.sh@945 -- # kill 97337 00:22:46.204 07:28:48 -- common/autotest_common.sh@950 -- # wait 97337 00:22:46.462 00:22:46.462 real 0m18.338s 00:22:46.462 user 0m33.607s 00:22:46.462 sys 0m5.497s 00:22:46.462 07:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.462 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:46.462 ************************************ 00:22:46.462 END TEST nvmf_digest_error 00:22:46.462 ************************************ 00:22:46.462 07:28:48 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:46.462 07:28:48 -- host/digest.sh@139 -- # nvmftestfini 00:22:46.462 07:28:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:46.462 07:28:48 -- nvmf/common.sh@116 -- # sync 00:22:46.722 07:28:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:46.722 07:28:48 -- nvmf/common.sh@119 -- # set +e 00:22:46.722 07:28:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:46.722 07:28:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:46.722 rmmod nvme_tcp 00:22:46.722 rmmod nvme_fabrics 00:22:46.722 rmmod nvme_keyring 00:22:46.722 07:28:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:46.722 07:28:48 -- nvmf/common.sh@123 -- # set -e 00:22:46.722 07:28:48 -- nvmf/common.sh@124 -- # return 0 00:22:46.722 07:28:48 -- nvmf/common.sh@477 -- # '[' -n 97337 ']' 00:22:46.722 07:28:48 -- nvmf/common.sh@478 -- # killprocess 97337 00:22:46.722 07:28:48 -- common/autotest_common.sh@926 -- # '[' -z 97337 ']' 00:22:46.722 07:28:48 -- common/autotest_common.sh@930 -- # kill -0 97337 00:22:46.722 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (97337) - No such process 
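The 448 transient errors counted above are read back over the bperf JSON-RPC socket and filtered with jq: every data-digest failure reported by tcp.c completes the WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what increments this per-bdev counter. A minimal sketch of the same query, assuming bperf is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'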
00:22:46.722 Process with pid 97337 is not found 00:22:46.722 07:28:48 -- common/autotest_common.sh@953 -- # echo 'Process with pid 97337 is not found' 00:22:46.722 07:28:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.722 07:28:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.722 07:28:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.722 07:28:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.722 07:28:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.722 07:28:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.722 07:28:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.722 07:28:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.722 07:28:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:46.722 00:22:46.722 real 0m37.100s 00:22:46.722 user 1m6.624s 00:22:46.722 sys 0m11.301s 00:22:46.722 07:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.722 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:46.722 ************************************ 00:22:46.722 END TEST nvmf_digest 00:22:46.722 ************************************ 00:22:46.722 07:28:48 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:46.722 07:28:48 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:46.722 07:28:48 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:46.722 07:28:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.722 07:28:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.722 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:46.722 ************************************ 00:22:46.722 START TEST nvmf_mdns_discovery 00:22:46.722 ************************************ 00:22:46.722 07:28:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:46.722 * Looking for test storage... 
00:22:46.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.982 07:28:48 -- nvmf/common.sh@7 -- # uname -s 00:22:46.982 07:28:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.982 07:28:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.982 07:28:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.982 07:28:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.982 07:28:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.982 07:28:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.982 07:28:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.982 07:28:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.982 07:28:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.982 07:28:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:22:46.982 07:28:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:22:46.982 07:28:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.982 07:28:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.982 07:28:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.982 07:28:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.982 07:28:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.982 07:28:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.982 07:28:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.982 07:28:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.982 07:28:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.982 07:28:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.982 07:28:48 -- 
paths/export.sh@5 -- # export PATH 00:22:46.982 07:28:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.982 07:28:48 -- nvmf/common.sh@46 -- # : 0 00:22:46.982 07:28:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:46.982 07:28:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:46.982 07:28:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:46.982 07:28:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.982 07:28:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.982 07:28:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:46.982 07:28:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:46.982 07:28:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:46.982 07:28:48 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:46.982 07:28:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:46.982 07:28:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.982 07:28:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:46.982 07:28:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:46.982 07:28:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:46.982 07:28:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.982 07:28:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.982 07:28:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.982 07:28:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:46.982 07:28:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:46.982 07:28:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.982 07:28:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.982 07:28:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:46.982 07:28:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:46.982 07:28:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.982 07:28:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.982 07:28:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.982 07:28:48 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.982 07:28:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.982 07:28:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.982 07:28:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.982 07:28:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.982 07:28:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:46.982 07:28:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:46.982 Cannot find device "nvmf_tgt_br" 00:22:46.982 07:28:48 -- nvmf/common.sh@154 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.982 Cannot find device "nvmf_tgt_br2" 00:22:46.982 07:28:48 -- nvmf/common.sh@155 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:46.982 07:28:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:46.982 Cannot find device "nvmf_tgt_br" 00:22:46.982 07:28:48 -- nvmf/common.sh@157 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:46.982 Cannot find device "nvmf_tgt_br2" 00:22:46.982 07:28:48 -- nvmf/common.sh@158 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:46.982 07:28:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:46.982 07:28:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.982 07:28:48 -- nvmf/common.sh@161 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.982 07:28:48 -- nvmf/common.sh@162 -- # true 00:22:46.982 07:28:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.982 07:28:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.982 07:28:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.982 07:28:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.982 07:28:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.982 07:28:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.982 07:28:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.982 07:28:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:46.982 07:28:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:46.982 07:28:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:46.982 07:28:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:46.982 07:28:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:46.982 07:28:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:46.982 07:28:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.982 07:28:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.982 07:28:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.242 07:28:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:22:47.242 07:28:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:47.242 07:28:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:47.242 07:28:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:47.242 07:28:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:47.242 07:28:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:47.242 07:28:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:47.242 07:28:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:47.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:22:47.242 00:22:47.242 --- 10.0.0.2 ping statistics --- 00:22:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.242 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:22:47.242 07:28:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:47.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:47.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:47.242 00:22:47.242 --- 10.0.0.3 ping statistics --- 00:22:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.242 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:47.242 07:28:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:47.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:47.242 00:22:47.242 --- 10.0.0.1 ping statistics --- 00:22:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.242 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:47.242 07:28:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.242 07:28:48 -- nvmf/common.sh@421 -- # return 0 00:22:47.242 07:28:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:47.242 07:28:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.242 07:28:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:47.242 07:28:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:47.242 07:28:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.242 07:28:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:47.242 07:28:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:47.242 07:28:48 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:47.242 07:28:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.242 07:28:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:47.242 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:47.242 07:28:48 -- nvmf/common.sh@469 -- # nvmfpid=97942 00:22:47.242 07:28:48 -- nvmf/common.sh@470 -- # waitforlisten 97942 00:22:47.242 07:28:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:47.242 07:28:48 -- common/autotest_common.sh@819 -- # '[' -z 97942 ']' 00:22:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.242 07:28:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.242 07:28:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:47.242 07:28:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
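The ip/iptables sequence traced above (nvmf_veth_init) is what gives the rest of the run its addresses: the host namespace keeps 10.0.0.1 on nvmf_init_if, the nvmf_tgt_ns_spdk namespace gets 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), and all three veth peers hang off the nvmf_br bridge. Condensed into a sketch (link-up commands omitted, interface names exactly as the test uses them):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings that follow only verify that 10.0.0.2 and 10.0.0.3 answer from the host side and that 10.0.0.1 answers from inside the namespace before the target application is started.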
00:22:47.242 07:28:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:47.242 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:47.242 [2024-11-04 07:28:48.983805] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:47.242 [2024-11-04 07:28:48.983906] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.501 [2024-11-04 07:28:49.123824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.501 [2024-11-04 07:28:49.197734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.501 [2024-11-04 07:28:49.198225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.501 [2024-11-04 07:28:49.198367] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.501 [2024-11-04 07:28:49.198487] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.501 [2024-11-04 07:28:49.198640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.501 07:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:47.501 07:28:49 -- common/autotest_common.sh@852 -- # return 0 00:22:47.501 07:28:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:47.501 07:28:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:47.501 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.501 07:28:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.501 07:28:49 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:47.501 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.501 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.501 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.501 07:28:49 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:47.501 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.501 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 [2024-11-04 07:28:49.412029] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 [2024-11-04 07:28:49.420161] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.760 null0 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 null1 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 null2 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 null3 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:47.760 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:47.760 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@47 -- # hostpid=97977 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:47.760 07:28:49 -- host/mdns_discovery.sh@48 -- # waitforlisten 97977 /tmp/host.sock 00:22:47.760 07:28:49 -- common/autotest_common.sh@819 -- # '[' -z 97977 ']' 00:22:47.760 07:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:47.760 07:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:47.760 07:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:47.760 07:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:47.760 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:22:47.760 [2024-11-04 07:28:49.527646] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
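Two SPDK apps are now in play for the mDNS test: the target started earlier inside nvmf_tgt_ns_spdk (pid 97942, default RPC socket /var/tmp/spdk.sock), which owns the four null bdevs and the TCP listeners, and this second nvmf_tgt (pid 97977), whose RPC socket /tmp/host.sock is where the discovery-side bdev_nvme commands go. Selecting the app is just the -s flag on rpc.py; a sketch of the split, assuming both sockets are up:

  # target-side RPCs go to the default socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MB null bdev, 512-byte blocks
  # host/discovery-side RPCs target the second app
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers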
00:22:47.760 [2024-11-04 07:28:49.527744] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97977 ] 00:22:48.018 [2024-11-04 07:28:49.670719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.018 [2024-11-04 07:28:49.737525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.018 [2024-11-04 07:28:49.737725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.951 07:28:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.951 07:28:50 -- common/autotest_common.sh@852 -- # return 0 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@57 -- # avahipid=98007 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:48.951 07:28:50 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:48.951 Process 1061 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:48.951 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:48.951 Successfully dropped root privileges. 00:22:48.951 avahi-daemon 0.8 starting up. 00:22:48.951 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:49.886 Successfully called chroot(). 00:22:49.886 Successfully dropped remaining capabilities. 00:22:49.886 No service file found in /etc/avahi/services. 00:22:49.886 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:49.886 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:49.886 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:49.886 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:49.886 Network interface enumeration completed. 00:22:49.886 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:22:49.886 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:49.886 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:22:49.886 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:49.886 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1112525070. 
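The avahi-daemon starting above runs inside the target namespace and reads its configuration from /dev/fd/63, which the test feeds with the echo on the same trace line (effectively a process-substituted config file). Written out as a plain avahi-daemon.conf it is just:

  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no

That restriction is why the startup messages above only report IPv4 mDNS group membership on nvmf_tgt_if2 (10.0.0.3) and nvmf_tgt_if (10.0.0.2): the daemon never touches the host-side interfaces, so the _nvme-disc._tcp advertisements published later stay scoped to the test network.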
00:22:49.886 07:28:51 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:49.886 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.886 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:49.886 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.886 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:49.886 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@68 -- # xargs 00:22:49.886 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 07:28:51 -- host/mdns_discovery.sh@68 -- # sort 00:22:49.886 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- 
host/mdns_discovery.sh@64 -- # xargs 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.145 [2024-11-04 07:28:51.969201] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.145 07:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.145 07:28:51 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.145 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.145 07:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 [2024-11-04 07:28:52.028941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 [2024-11-04 07:28:52.068857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:50.404 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.404 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.404 [2024-11-04 07:28:52.076849] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.404 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98065 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:50.404 07:28:52 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:51.358 [2024-11-04 07:28:52.869204] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:51.358 Established under name 'CDC' 00:22:51.633 [2024-11-04 07:28:53.269216] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.633 [2024-11-04 07:28:53.269240] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:51.633 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.633 cookie is 0 00:22:51.633 is_local: 1 00:22:51.633 our_own: 0 00:22:51.633 wide_area: 0 00:22:51.633 multicast: 1 00:22:51.633 cached: 1 00:22:51.633 [2024-11-04 07:28:53.369210] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.633 [2024-11-04 07:28:53.369233] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:22:51.633 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.633 cookie is 0 00:22:51.633 is_local: 1 00:22:51.633 our_own: 0 00:22:51.633 wide_area: 0 00:22:51.633 multicast: 1 00:22:51.633 cached: 1 00:22:52.569 [2024-11-04 07:28:54.277193] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:52.569 [2024-11-04 07:28:54.277238] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:52.569 [2024-11-04 07:28:54.277256] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:52.569 [2024-11-04 07:28:54.363292] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:52.569 [2024-11-04 07:28:54.376850] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery 
ctrlr attached 00:22:52.569 [2024-11-04 07:28:54.376871] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.569 [2024-11-04 07:28:54.376914] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.827 [2024-11-04 07:28:54.424329] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:52.827 [2024-11-04 07:28:54.424356] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:52.828 [2024-11-04 07:28:54.465569] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:52.828 [2024-11-04 07:28:54.527139] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:52.828 [2024-11-04 07:28:54.527165] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:55.359 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.359 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@80 -- # sort 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@80 -- # xargs 00:22:55.359 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:55.359 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.359 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@76 -- # sort 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@76 -- # xargs 00:22:55.359 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.359 07:28:57 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.618 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.618 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@68 -- # sort 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@68 -- # xargs 00:22:55.618 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.618 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.618 
07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@64 -- # sort 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@64 -- # xargs 00:22:55.618 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:55.618 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.618 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # xargs 00:22:55.618 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:55.618 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.618 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@72 -- # xargs 00:22:55.618 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:55.618 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.618 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.618 07:28:57 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:55.618 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:55.877 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.877 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.877 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:55.877 07:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.877 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.877 07:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.877 07:28:57 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:56.812 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@64 -- # sort 00:22:56.812 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@64 -- # xargs 00:22:56.812 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.812 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.812 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:56.812 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:56.812 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.812 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.812 [2024-11-04 07:28:58.599340] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.812 [2024-11-04 07:28:58.599792] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:56.812 [2024-11-04 07:28:58.599817] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:56.812 [2024-11-04 07:28:58.599848] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:56.812 [2024-11-04 07:28:58.599861] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:56.812 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:56.812 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.812 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.812 [2024-11-04 07:28:58.607307] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:56.812 [2024-11-04 07:28:58.607802] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:56.812 [2024-11-04 07:28:58.607849] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:56.812 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.812 07:28:58 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:57.071 [2024-11-04 07:28:58.740909] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:57.071 [2024-11-04 07:28:58.741052] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:57.071 [2024-11-04 07:28:58.800104] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:57.071 [2024-11-04 07:28:58.800125] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.071 [2024-11-04 07:28:58.800130] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.071 [2024-11-04 07:28:58.800146] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.071 [2024-11-04 07:28:58.800206] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:57.071 [2024-11-04 07:28:58.800215] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.071 [2024-11-04 07:28:58.800220] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:57.071 [2024-11-04 07:28:58.800231] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.071 [2024-11-04 07:28:58.846046] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.071 [2024-11-04 07:28:58.846185] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.071 [2024-11-04 07:28:58.846249] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.071 [2024-11-04 07:28:58.846258] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@68 -- # sort 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@68 -- # xargs 00:22:58.007 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.007 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.007 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@64 -- # sort 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:58.007 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@64 -- # xargs 00:22:58.007 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.007 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.007 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.007 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:22:58.007 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.007 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.007 07:28:59 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.007 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:58.268 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.268 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:58.268 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.268 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.268 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 [2024-11-04 07:28:59.916551] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.268 [2024-11-04 07:28:59.916579] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.268 [2024-11-04 07:28:59.916608] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.268 [2024-11-04 07:28:59.916619] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.268 [2024-11-04 07:28:59.916815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.916843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.916871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.916895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.916913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.916923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.916932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.916947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.268 
07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:58.268 07:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.268 07:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 [2024-11-04 07:28:59.924566] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.268 [2024-11-04 07:28:59.924615] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.268 [2024-11-04 07:28:59.926775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.268 07:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.268 07:28:59 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:58.268 [2024-11-04 07:28:59.932699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.932757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.932765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.932773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.932781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.932789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.268 [2024-11-04 07:28:59.932797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.268 [2024-11-04 07:28:59.932804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.268 [2024-11-04 07:28:59.936793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.936910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.936958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.936974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.269 [2024-11-04 07:28:59.936982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.936999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.937011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.937019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.937028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.269 [2024-11-04 07:28:59.937042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.269 [2024-11-04 07:28:59.942668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.946844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.947134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.947182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.947198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.269 [2024-11-04 07:28:59.947207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.947223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.947237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.947245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.947254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.269 [2024-11-04 07:28:59.947268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.269 [2024-11-04 07:28:59.952676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.269 [2024-11-04 07:28:59.952759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.952802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.952816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.269 [2024-11-04 07:28:59.952824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.952838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.952849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.952856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.952863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.269 [2024-11-04 07:28:59.952888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
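The connect() failures above (errno = 111, i.e. ECONNREFUSED) are expected at this point in the run: the 4420 listeners were just removed from both subsystems while 4421 listeners were added, so the host's reset/reconnect attempts against port 4420 keep failing until the next discovery log page moves the paths over to 4421. A condensed sketch of the RPC sequence driving this phase, reconstructed from the trace above (rpc_cmd is the harness's wrapper around scripts/rpc.py; calls without -s go to the target's default RPC socket):

    # Advertise the new path first, then retire the old one, per subsystem.
    rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420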
00:22:58.269 [2024-11-04 07:28:59.957091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.957163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.957203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.957217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.269 [2024-11-04 07:28:59.957225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.957238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.957250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.957256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.957264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.269 [2024-11-04 07:28:59.957276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.269 [2024-11-04 07:28:59.962727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.269 [2024-11-04 07:28:59.962814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.962857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.962882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.269 [2024-11-04 07:28:59.962893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.962907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.962919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.962926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.962933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.269 [2024-11-04 07:28:59.962946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.269 [2024-11-04 07:28:59.967136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.967206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.967246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.967260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.269 [2024-11-04 07:28:59.967268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.967281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.967292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.967299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.967306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.269 [2024-11-04 07:28:59.967318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.269 [2024-11-04 07:28:59.972782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.269 [2024-11-04 07:28:59.973048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.973097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.973113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.269 [2024-11-04 07:28:59.973123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.973139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.973153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.973160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.973169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.269 [2024-11-04 07:28:59.973184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
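The port assertions interleaved with this loop ([[ 4420 == 4420 ]] before the listener switch, [[ 4421 == 4421 ]] after it) come from a helper that lists one discovered controller's active paths. Its shape can be read off the @72 trace lines; the following is a reconstruction, not the verbatim script:

    get_subsystem_paths() {
        local name=$1
        # Query the host-side controller and print each path's trsvcid (port), sorted.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # e.g. get_subsystem_paths mdns0_nvme0 prints "4420 4421" mid-migration, "4421" once it completes.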
00:22:58.269 [2024-11-04 07:28:59.977180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.977255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.977295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.977310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.269 [2024-11-04 07:28:59.977318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.977331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.977343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.977349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.977357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.269 [2024-11-04 07:28:59.977369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.269 [2024-11-04 07:28:59.983019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.269 [2024-11-04 07:28:59.983091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.983132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.983146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.269 [2024-11-04 07:28:59.983154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.269 [2024-11-04 07:28:59.983167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.269 [2024-11-04 07:28:59.983179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.269 [2024-11-04 07:28:59.983185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.269 [2024-11-04 07:28:59.983192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.269 [2024-11-04 07:28:59.983204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.269 [2024-11-04 07:28:59.987228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.269 [2024-11-04 07:28:59.987450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.987498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.269 [2024-11-04 07:28:59.987514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.270 [2024-11-04 07:28:59.987524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:28:59.987539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:28:59.987552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:28:59.987560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.270 [2024-11-04 07:28:59.987568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.270 [2024-11-04 07:28:59.987582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.270 [2024-11-04 07:28:59.993067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.270 [2024-11-04 07:28:59.993140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:28:59.993181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:28:59.993195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.270 [2024-11-04 07:28:59.993203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:28:59.993216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:28:59.993227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:28:59.993234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.270 [2024-11-04 07:28:59.993241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.270 [2024-11-04 07:28:59.993253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
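Likewise, the namespace assertions ([[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == ... ]]) compare against a helper that flattens the host's bdev list into one sorted, space-separated string. Reconstructed from the @64 trace lines (again an approximation of the real helper):

    get_bdev_list() {
        # List every bdev attached on the host side as a single sorted line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # With null1/null3 added to the two subsystems, four namespaces are expected:
    #   mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2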
00:22:58.270 [2024-11-04 07:28:59.997412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.270 [2024-11-04 07:28:59.997639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:28:59.997684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:28:59.997700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.270 [2024-11-04 07:28:59.997709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:28:59.997725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:28:59.997756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:28:59.997767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.270 [2024-11-04 07:28:59.997775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.270 [2024-11-04 07:28:59.997788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.270 [2024-11-04 07:29:00.003118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.270 [2024-11-04 07:29:00.003210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.003259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.003275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.270 [2024-11-04 07:29:00.003285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.003301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.003315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.003324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.003333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.270 [2024-11-04 07:29:00.003348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.270 [2024-11-04 07:29:00.007604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.270 [2024-11-04 07:29:00.007903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.008206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.008329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.270 [2024-11-04 07:29:00.008457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.008637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.008677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.008688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.008697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.270 [2024-11-04 07:29:00.008713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.270 [2024-11-04 07:29:00.013179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.270 [2024-11-04 07:29:00.013284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.013330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.013345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.270 [2024-11-04 07:29:00.013354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.013368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.013381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.013388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.013396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.270 [2024-11-04 07:29:00.013409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
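The notification checks in the trace ([[ 2 == 2 ]], later [[ 0 == 0 ]] and [[ 4 == 4 ]]) count how many new events the host RPC server reported since the previous check. The @87/@88 lines suggest a helper roughly like the sketch below; the variable names and the notify_id bookkeeping are assumptions inferred from the printed values (2 -> 4 -> 4 -> 8), not copied from the script:

    get_notification_count() {
        # Fetch notifications newer than the last seen id and count them.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }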
00:22:58.270 [2024-11-04 07:29:00.017849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.270 [2024-11-04 07:29:00.017971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.018021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.018036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.270 [2024-11-04 07:29:00.018045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.018058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.018070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.018078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.018085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.270 [2024-11-04 07:29:00.018098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.270 [2024-11-04 07:29:00.023268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.270 [2024-11-04 07:29:00.023425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.023487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.023534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.270 [2024-11-04 07:29:00.023543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.023573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.023601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.023610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.023618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.270 [2024-11-04 07:29:00.023633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.270 [2024-11-04 07:29:00.027932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.270 [2024-11-04 07:29:00.028011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.028053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.028068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.270 [2024-11-04 07:29:00.028076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.028090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.028102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.270 [2024-11-04 07:29:00.028109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.270 [2024-11-04 07:29:00.028116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.270 [2024-11-04 07:29:00.028129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.270 [2024-11-04 07:29:00.033335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.270 [2024-11-04 07:29:00.033424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.033465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.270 [2024-11-04 07:29:00.033479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.270 [2024-11-04 07:29:00.033488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.270 [2024-11-04 07:29:00.033501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.270 [2024-11-04 07:29:00.033512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.271 [2024-11-04 07:29:00.033519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.271 [2024-11-04 07:29:00.033527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.271 [2024-11-04 07:29:00.033540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.271 [2024-11-04 07:29:00.037983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.271 [2024-11-04 07:29:00.038244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.038298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.038314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.271 [2024-11-04 07:29:00.038324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.271 [2024-11-04 07:29:00.038340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.271 [2024-11-04 07:29:00.038353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.271 [2024-11-04 07:29:00.038361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.271 [2024-11-04 07:29:00.038369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.271 [2024-11-04 07:29:00.038384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.271 [2024-11-04 07:29:00.043398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.271 [2024-11-04 07:29:00.043489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.043532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.043547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.271 [2024-11-04 07:29:00.043556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.271 [2024-11-04 07:29:00.043570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.271 [2024-11-04 07:29:00.043582] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.271 [2024-11-04 07:29:00.043589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.271 [2024-11-04 07:29:00.043596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.271 [2024-11-04 07:29:00.043609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.271 [2024-11-04 07:29:00.048205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.271 [2024-11-04 07:29:00.048463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.048510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.048526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe61aa0 with addr=10.0.0.2, port=4420 00:22:58.271 [2024-11-04 07:29:00.048536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61aa0 is same with the state(5) to be set 00:22:58.271 [2024-11-04 07:29:00.048552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61aa0 (9): Bad file descriptor 00:22:58.271 [2024-11-04 07:29:00.048565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.271 [2024-11-04 07:29:00.048573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.271 [2024-11-04 07:29:00.048582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.271 [2024-11-04 07:29:00.048596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.271 [2024-11-04 07:29:00.053461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:58.271 [2024-11-04 07:29:00.053552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.053595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.271 [2024-11-04 07:29:00.053610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4c760 with addr=10.0.0.3, port=4420 00:22:58.271 [2024-11-04 07:29:00.053618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c760 is same with the state(5) to be set 00:22:58.271 [2024-11-04 07:29:00.053632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c760 (9): Bad file descriptor 00:22:58.271 [2024-11-04 07:29:00.053644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:58.271 [2024-11-04 07:29:00.053651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:58.271 [2024-11-04 07:29:00.053659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:58.271 [2024-11-04 07:29:00.053671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
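Here the reconnect loop ends: once each discovery controller fetches a fresh log page, the retired 4420 entries are reported "not found" and pruned while the 4421 entries are "found again" and kept, as the discovery_remove_controllers lines that follow show. The re-verification below therefore expects each controller to be left with a single path; in effect the test asserts something like:

    [[ $(get_subsystem_paths mdns0_nvme0) == "4421" ]]
    [[ $(get_subsystem_paths mdns1_nvme0) == "4421" ]]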
00:22:58.271 [2024-11-04 07:29:00.056960] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:58.271 [2024-11-04 07:29:00.056987] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.271 [2024-11-04 07:29:00.057006] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.271 [2024-11-04 07:29:00.057043] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:58.271 [2024-11-04 07:29:00.057056] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.271 [2024-11-04 07:29:00.057069] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.529 [2024-11-04 07:29:00.143051] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.529 [2024-11-04 07:29:00.143100] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:59.464 07:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@68 -- # sort 00:22:59.464 07:29:00 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@68 -- # xargs 00:22:59.464 07:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@64 -- # sort 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@64 -- # xargs 00:22:59.464 07:29:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.464 07:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:00 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.464 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
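Once the path checks below confirm that only 4421 remains, the test exercises the mDNS discovery RPCs themselves: it stops the "mdns" service, confirms the discovered controllers and bdevs disappear, starts it again, and verifies that a conflicting second start is rejected with -17 "File exists". In outline, taken from the RPC calls visible in the trace (NOT is the harness's negated-assertion wrapper):

    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
    # host-side controller and bdev lists are now empty

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns \
        -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # A second start while discovery is already running must fail (err -17, File exists):
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns \
        -s _nvme-disc._http -q nqn.2021-12.io.spdk:test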
00:22:59.464 07:29:01 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.464 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.464 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:59.464 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:59.464 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:59.464 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:22:59.464 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:59.464 07:29:01 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:59.464 [2024-11-04 07:29:01.269272] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:00.399 07:29:02 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:00.399 07:29:02 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:00.399 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.399 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.399 07:29:02 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:00.399 07:29:02 -- host/mdns_discovery.sh@80 -- # xargs 00:23:00.399 07:29:02 -- host/mdns_discovery.sh@80 -- # sort 00:23:00.399 07:29:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.658 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.658 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@68 -- # sort 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@68 -- # xargs 00:23:00.658 07:29:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:00.658 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@64 -- # sort 00:23:00.658 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@64 -- # xargs 00:23:00.658 07:29:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:00.658 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:00.658 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.658 07:29:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:00.658 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.658 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.658 07:29:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:00.658 07:29:02 -- common/autotest_common.sh@640 -- # local es=0 00:23:00.658 07:29:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:00.658 07:29:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:00.658 07:29:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:00.658 07:29:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:00.658 07:29:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:00.658 07:29:02 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:00.658 07:29:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.658 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:00.658 [2024-11-04 07:29:02.449254] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:00.658 2024/11/04 07:29:02 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:00.658 request: 00:23:00.658 { 00:23:00.658 "method": "bdev_nvme_start_mdns_discovery", 00:23:00.658 "params": { 00:23:00.658 "name": "mdns", 00:23:00.658 "svcname": "_nvme-disc._http", 00:23:00.658 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:00.658 } 00:23:00.658 } 00:23:00.658 Got JSON-RPC error response 00:23:00.658 GoRPCClient: error on JSON-RPC call 00:23:00.658 07:29:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:00.658 07:29:02 -- 
common/autotest_common.sh@643 -- # es=1 00:23:00.658 07:29:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:00.658 07:29:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:00.658 07:29:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:00.658 07:29:02 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:01.231 [2024-11-04 07:29:02.837832] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:01.231 [2024-11-04 07:29:02.937831] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:01.231 [2024-11-04 07:29:03.037836] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.231 [2024-11-04 07:29:03.038002] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:01.231 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.231 cookie is 0 00:23:01.231 is_local: 1 00:23:01.231 our_own: 0 00:23:01.231 wide_area: 0 00:23:01.231 multicast: 1 00:23:01.231 cached: 1 00:23:01.489 [2024-11-04 07:29:03.137835] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:01.489 [2024-11-04 07:29:03.138076] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:01.489 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:01.489 cookie is 0 00:23:01.489 is_local: 1 00:23:01.489 our_own: 0 00:23:01.489 wide_area: 0 00:23:01.489 multicast: 1 00:23:01.489 cached: 1 00:23:02.425 [2024-11-04 07:29:04.048276] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:02.425 [2024-11-04 07:29:04.048415] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:02.425 [2024-11-04 07:29:04.048446] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:02.425 [2024-11-04 07:29:04.134363] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:02.425 [2024-11-04 07:29:04.148202] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:02.425 [2024-11-04 07:29:04.148220] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:02.425 [2024-11-04 07:29:04.148235] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.425 [2024-11-04 07:29:04.201172] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:02.425 [2024-11-04 07:29:04.201195] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:02.425 [2024-11-04 07:29:04.234267] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:02.683 [2024-11-04 07:29:04.292966] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:02.683 [2024-11-04 07:29:04.292988] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@80 -- # sort 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@80 -- # xargs 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # xargs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # sort 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # sort 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # xargs 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:05.969 07:29:07 -- common/autotest_common.sh@640 -- # local es=0 00:23:05.969 07:29:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:05.969 07:29:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:05.969 07:29:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:05.969 07:29:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:05.969 07:29:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:05.969 07:29:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 [2024-11-04 07:29:07.628244] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:05.969 2024/11/04 07:29:07 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:05.969 request: 00:23:05.969 { 00:23:05.969 "method": "bdev_nvme_start_mdns_discovery", 00:23:05.969 "params": { 00:23:05.969 "name": "cdc", 00:23:05.969 "svcname": "_nvme-disc._tcp", 00:23:05.969 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:05.969 } 00:23:05.969 } 00:23:05.969 Got JSON-RPC error response 00:23:05.969 GoRPCClient: error on JSON-RPC call 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:05.969 07:29:07 -- common/autotest_common.sh@643 -- # es=1 00:23:05.969 07:29:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:05.969 07:29:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:05.969 07:29:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # sort 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@76 -- # xargs 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # xargs 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@64 -- # sort 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:05.969 07:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.969 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:23:05.969 07:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@197 -- # kill 97977 00:23:05.969 07:29:07 -- host/mdns_discovery.sh@200 -- # wait 97977 00:23:06.228 [2024-11-04 07:29:07.851109] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:06.228 07:29:07 -- host/mdns_discovery.sh@201 -- # kill 98065 00:23:06.228 Got SIGTERM, quitting. 00:23:06.228 07:29:07 -- host/mdns_discovery.sh@202 -- # kill 98007 00:23:06.228 07:29:07 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:06.228 07:29:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:06.228 07:29:07 -- nvmf/common.sh@116 -- # sync 00:23:06.228 Got SIGTERM, quitting. 
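For context on the sequence above: the mDNS discovery flow that mdns_discovery.sh exercises can be driven by hand with SPDK's scripts/rpc.py client. The sketch below is assembled only from the RPC calls visible in this trace (same Unix socket, bdev name prefix and host NQN); it is an illustrative outline against a running target, not part of the captured output.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/tmp/host.sock
  # Start avahi-based discovery for _nvme-disc._tcp; controllers it attaches use the "mdns" name prefix.
  "$rpc" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # A second start under the same name is rejected with -17 (File exists), which is what the NOT checks above assert.
  "$rpc" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
    || echo "already running, as expected"
  # Inspect what discovery attached, then stop it.
  "$rpc" -s "$sock" bdev_nvme_get_mdns_discovery_info
  "$rpc" -s "$sock" bdev_nvme_get_controllers
  "$rpc" -s "$sock" bdev_nvme_stop_mdns_discovery -b mdns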
00:23:06.228 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:06.228 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:06.228 avahi-daemon 0.8 exiting. 00:23:06.228 07:29:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:06.228 07:29:08 -- nvmf/common.sh@119 -- # set +e 00:23:06.228 07:29:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:06.228 07:29:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:06.228 rmmod nvme_tcp 00:23:06.228 rmmod nvme_fabrics 00:23:06.228 rmmod nvme_keyring 00:23:06.228 07:29:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:06.228 07:29:08 -- nvmf/common.sh@123 -- # set -e 00:23:06.228 07:29:08 -- nvmf/common.sh@124 -- # return 0 00:23:06.228 07:29:08 -- nvmf/common.sh@477 -- # '[' -n 97942 ']' 00:23:06.228 07:29:08 -- nvmf/common.sh@478 -- # killprocess 97942 00:23:06.228 07:29:08 -- common/autotest_common.sh@926 -- # '[' -z 97942 ']' 00:23:06.228 07:29:08 -- common/autotest_common.sh@930 -- # kill -0 97942 00:23:06.228 07:29:08 -- common/autotest_common.sh@931 -- # uname 00:23:06.228 07:29:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:06.228 07:29:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97942 00:23:06.487 07:29:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:06.487 07:29:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:06.487 killing process with pid 97942 00:23:06.487 07:29:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97942' 00:23:06.487 07:29:08 -- common/autotest_common.sh@945 -- # kill 97942 00:23:06.487 07:29:08 -- common/autotest_common.sh@950 -- # wait 97942 00:23:06.746 07:29:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:06.746 07:29:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:06.746 07:29:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:06.746 07:29:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.746 07:29:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:06.746 07:29:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.746 07:29:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.746 07:29:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.746 07:29:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:06.746 00:23:06.746 real 0m19.897s 00:23:06.746 user 0m39.506s 00:23:06.746 sys 0m1.906s 00:23:06.746 07:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.746 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.746 ************************************ 00:23:06.746 END TEST nvmf_mdns_discovery 00:23:06.746 ************************************ 00:23:06.746 07:29:08 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:06.746 07:29:08 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:06.746 07:29:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:06.746 07:29:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:06.746 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.746 ************************************ 00:23:06.746 START TEST nvmf_multipath 00:23:06.746 ************************************ 00:23:06.746 07:29:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:06.746 * Looking for 
test storage... 00:23:06.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:06.747 07:29:08 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.747 07:29:08 -- nvmf/common.sh@7 -- # uname -s 00:23:06.747 07:29:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.747 07:29:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.747 07:29:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.747 07:29:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.747 07:29:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.747 07:29:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.747 07:29:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.747 07:29:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.747 07:29:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.747 07:29:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:23:06.747 07:29:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:23:06.747 07:29:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.747 07:29:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.747 07:29:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.747 07:29:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.747 07:29:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.747 07:29:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.747 07:29:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.747 07:29:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.747 07:29:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.747 07:29:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.747 07:29:08 -- 
paths/export.sh@5 -- # export PATH 00:23:06.747 07:29:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.747 07:29:08 -- nvmf/common.sh@46 -- # : 0 00:23:06.747 07:29:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:06.747 07:29:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:06.747 07:29:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:06.747 07:29:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.747 07:29:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.747 07:29:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:06.747 07:29:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:06.747 07:29:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:06.747 07:29:08 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.747 07:29:08 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.747 07:29:08 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.747 07:29:08 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:06.747 07:29:08 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.747 07:29:08 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:06.747 07:29:08 -- host/multipath.sh@30 -- # nvmftestinit 00:23:06.747 07:29:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:06.747 07:29:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.747 07:29:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:06.747 07:29:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:06.747 07:29:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:06.747 07:29:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.747 07:29:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.747 07:29:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.747 07:29:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:06.747 07:29:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:06.747 07:29:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.747 07:29:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.747 07:29:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:06.747 07:29:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:06.747 07:29:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:06.747 07:29:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:06.747 07:29:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:06.747 07:29:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.747 07:29:08 -- 
nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:06.747 07:29:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:06.747 07:29:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:06.747 07:29:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:06.747 07:29:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:06.747 07:29:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:07.006 Cannot find device "nvmf_tgt_br" 00:23:07.006 07:29:08 -- nvmf/common.sh@154 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.006 Cannot find device "nvmf_tgt_br2" 00:23:07.006 07:29:08 -- nvmf/common.sh@155 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:07.006 07:29:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:07.006 Cannot find device "nvmf_tgt_br" 00:23:07.006 07:29:08 -- nvmf/common.sh@157 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:07.006 Cannot find device "nvmf_tgt_br2" 00:23:07.006 07:29:08 -- nvmf/common.sh@158 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:07.006 07:29:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:07.006 07:29:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.006 07:29:08 -- nvmf/common.sh@161 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.006 07:29:08 -- nvmf/common.sh@162 -- # true 00:23:07.006 07:29:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.006 07:29:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.006 07:29:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.006 07:29:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.006 07:29:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.006 07:29:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.006 07:29:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.006 07:29:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.006 07:29:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.006 07:29:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:07.006 07:29:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:07.006 07:29:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:07.006 07:29:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:07.006 07:29:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.006 07:29:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.006 07:29:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.006 07:29:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:07.006 07:29:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 
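The nvmf_veth_init steps traced here build the virtual test topology for the multipath run: a namespace (nvmf_tgt_ns_spdk) holding the target ends of two veth pairs, an initiator-side veth on the host, and a bridge (nvmf_br) joining them, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target. A condensed sketch of that setup, based only on the ip commands in this trace (the bridge enslaving, iptables rules and ping checks follow immediately below):

  # Condensed sketch of the virtual test network built by nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target path
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up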
00:23:07.006 07:29:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:07.006 07:29:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.006 07:29:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.265 07:29:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.265 07:29:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.265 07:29:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:07.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:07.265 00:23:07.265 --- 10.0.0.2 ping statistics --- 00:23:07.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.265 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:07.265 07:29:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:07.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:07.265 00:23:07.265 --- 10.0.0.3 ping statistics --- 00:23:07.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.265 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:07.265 07:29:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:07.265 00:23:07.265 --- 10.0.0.1 ping statistics --- 00:23:07.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.265 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:07.265 07:29:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.265 07:29:08 -- nvmf/common.sh@421 -- # return 0 00:23:07.265 07:29:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:07.265 07:29:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.265 07:29:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:07.265 07:29:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:07.265 07:29:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.265 07:29:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:07.265 07:29:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:07.265 07:29:08 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:07.265 07:29:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:07.265 07:29:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:07.265 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:23:07.265 07:29:08 -- nvmf/common.sh@469 -- # nvmfpid=98576 00:23:07.265 07:29:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:07.265 07:29:08 -- nvmf/common.sh@470 -- # waitforlisten 98576 00:23:07.265 07:29:08 -- common/autotest_common.sh@819 -- # '[' -z 98576 ']' 00:23:07.265 07:29:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.265 07:29:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:07.265 07:29:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
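With both links bridged and ACCEPT rules installed for port 4420 and bridge forwarding, the script verifies reachability in both directions and starts the target application inside the namespace, using the same binary path and flags shown in this run. Roughly:

  ping -c 1 10.0.0.2                                  # first target address
  ping -c 1 10.0.0.3                                  # second target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # back to the initiator

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # waitforlisten then blocks until the RPC server answers on /var/tmp/spdk.sock;
  # a crude stand-in for that helper (an assumption, not the helper itself) is:
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done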
00:23:07.265 07:29:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:07.265 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:23:07.265 [2024-11-04 07:29:08.956260] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:07.265 [2024-11-04 07:29:08.956326] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.265 [2024-11-04 07:29:09.092629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.524 [2024-11-04 07:29:09.161452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.524 [2024-11-04 07:29:09.161651] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.524 [2024-11-04 07:29:09.161668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.524 [2024-11-04 07:29:09.161680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.524 [2024-11-04 07:29:09.161822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.524 [2024-11-04 07:29:09.161844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.460 07:29:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:08.460 07:29:09 -- common/autotest_common.sh@852 -- # return 0 00:23:08.460 07:29:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:08.460 07:29:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:08.460 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.460 07:29:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.460 07:29:10 -- host/multipath.sh@33 -- # nvmfapp_pid=98576 00:23:08.460 07:29:10 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:08.719 [2024-11-04 07:29:10.316066] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.719 07:29:10 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:08.978 Malloc0 00:23:08.978 07:29:10 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:09.239 07:29:10 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.513 07:29:11 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.513 [2024-11-04 07:29:11.297656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.513 07:29:11 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.785 [2024-11-04 07:29:11.505801] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.785 07:29:11 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:09.785 07:29:11 -- host/multipath.sh@44 -- # bdevperf_pid=98680 
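Once the target answers on /var/tmp/spdk.sock, multipath.sh provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, and two listeners on the same address with different service IDs. A minimal sketch built from the RPC calls in this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # talks to /var/tmp/spdk.sock by default
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as used by this run
  "$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

  # The multipath scenarios below then flip per-listener ANA state, for example:
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized

On the host side, bdevperf attaches Nvme0 to both listeners (the 4421 path with -x multipath) over /var/tmp/bdevperf.sock, and confirm_io_on_port runs the bpftrace script nvmf_path.bt against the target process to record which trsvcid actually serves I/O; the @path[10.0.0.2, <port>] counters in the trace dumps below come from that script.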
00:23:09.785 07:29:11 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.785 07:29:11 -- host/multipath.sh@47 -- # waitforlisten 98680 /var/tmp/bdevperf.sock 00:23:09.785 07:29:11 -- common/autotest_common.sh@819 -- # '[' -z 98680 ']' 00:23:09.785 07:29:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.785 07:29:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:09.785 07:29:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.785 07:29:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:09.785 07:29:11 -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 07:29:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:10.721 07:29:12 -- common/autotest_common.sh@852 -- # return 0 00:23:10.721 07:29:12 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:10.980 07:29:12 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:11.547 Nvme0n1 00:23:11.547 07:29:13 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:11.806 Nvme0n1 00:23:11.806 07:29:13 -- host/multipath.sh@78 -- # sleep 1 00:23:11.806 07:29:13 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.742 07:29:14 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:12.742 07:29:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:13.001 07:29:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.259 07:29:14 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:13.259 07:29:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.259 07:29:14 -- host/multipath.sh@65 -- # dtrace_pid=98767 00:23:13.259 07:29:14 -- host/multipath.sh@66 -- # sleep 6 00:23:19.823 07:29:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:19.823 07:29:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:19.823 07:29:21 -- host/multipath.sh@67 -- # active_port=4421 00:23:19.823 07:29:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.823 Attaching 4 probes... 
00:23:19.823 @path[10.0.0.2, 4421]: 21410 00:23:19.823 @path[10.0.0.2, 4421]: 21810 00:23:19.823 @path[10.0.0.2, 4421]: 21783 00:23:19.823 @path[10.0.0.2, 4421]: 21848 00:23:19.823 @path[10.0.0.2, 4421]: 21633 00:23:19.823 07:29:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:19.823 07:29:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:19.823 07:29:21 -- host/multipath.sh@69 -- # sed -n 1p 00:23:19.823 07:29:21 -- host/multipath.sh@69 -- # port=4421 00:23:19.823 07:29:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.823 07:29:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.823 07:29:21 -- host/multipath.sh@72 -- # kill 98767 00:23:19.823 07:29:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.823 07:29:21 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:19.823 07:29:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:19.823 07:29:21 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.082 07:29:21 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:20.082 07:29:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.082 07:29:21 -- host/multipath.sh@65 -- # dtrace_pid=98905 00:23:20.082 07:29:21 -- host/multipath.sh@66 -- # sleep 6 00:23:26.647 07:29:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:26.647 07:29:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:26.647 07:29:28 -- host/multipath.sh@67 -- # active_port=4420 00:23:26.647 07:29:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.647 Attaching 4 probes... 
00:23:26.647 @path[10.0.0.2, 4420]: 21254 00:23:26.647 @path[10.0.0.2, 4420]: 21706 00:23:26.647 @path[10.0.0.2, 4420]: 21839 00:23:26.647 @path[10.0.0.2, 4420]: 21710 00:23:26.647 @path[10.0.0.2, 4420]: 21705 00:23:26.647 07:29:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:26.647 07:29:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:26.647 07:29:28 -- host/multipath.sh@69 -- # sed -n 1p 00:23:26.647 07:29:28 -- host/multipath.sh@69 -- # port=4420 00:23:26.647 07:29:28 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.647 07:29:28 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.647 07:29:28 -- host/multipath.sh@72 -- # kill 98905 00:23:26.647 07:29:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.647 07:29:28 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:26.647 07:29:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:26.647 07:29:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:26.906 07:29:28 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:26.906 07:29:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:26.906 07:29:28 -- host/multipath.sh@65 -- # dtrace_pid=99032 00:23:26.906 07:29:28 -- host/multipath.sh@66 -- # sleep 6 00:23:33.471 07:29:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:33.471 07:29:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:33.471 07:29:34 -- host/multipath.sh@67 -- # active_port=4421 00:23:33.471 07:29:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.471 Attaching 4 probes... 
00:23:33.471 @path[10.0.0.2, 4421]: 14905 00:23:33.471 @path[10.0.0.2, 4421]: 21508 00:23:33.471 @path[10.0.0.2, 4421]: 21543 00:23:33.471 @path[10.0.0.2, 4421]: 21603 00:23:33.471 @path[10.0.0.2, 4421]: 21578 00:23:33.471 07:29:34 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:33.471 07:29:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:33.471 07:29:34 -- host/multipath.sh@69 -- # sed -n 1p 00:23:33.471 07:29:34 -- host/multipath.sh@69 -- # port=4421 00:23:33.471 07:29:34 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.471 07:29:34 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.471 07:29:34 -- host/multipath.sh@72 -- # kill 99032 00:23:33.471 07:29:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.471 07:29:34 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:33.471 07:29:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:33.471 07:29:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.730 07:29:35 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:33.730 07:29:35 -- host/multipath.sh@65 -- # dtrace_pid=99168 00:23:33.730 07:29:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.730 07:29:35 -- host/multipath.sh@66 -- # sleep 6 00:23:40.354 07:29:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:40.354 07:29:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:40.354 07:29:41 -- host/multipath.sh@67 -- # active_port= 00:23:40.354 07:29:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.354 Attaching 4 probes... 
00:23:40.354 00:23:40.354 00:23:40.354 00:23:40.354 00:23:40.354 00:23:40.354 07:29:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:40.354 07:29:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:40.354 07:29:41 -- host/multipath.sh@69 -- # sed -n 1p 00:23:40.354 07:29:41 -- host/multipath.sh@69 -- # port= 00:23:40.354 07:29:41 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:40.354 07:29:41 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:40.354 07:29:41 -- host/multipath.sh@72 -- # kill 99168 00:23:40.354 07:29:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:40.354 07:29:41 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:40.354 07:29:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.354 07:29:41 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.354 07:29:42 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:40.354 07:29:42 -- host/multipath.sh@65 -- # dtrace_pid=99299 00:23:40.354 07:29:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.354 07:29:42 -- host/multipath.sh@66 -- # sleep 6 00:23:46.918 07:29:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.918 07:29:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:46.918 07:29:48 -- host/multipath.sh@67 -- # active_port=4421 00:23:46.918 07:29:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.918 Attaching 4 probes... 
00:23:46.918 @path[10.0.0.2, 4421]: 20917 00:23:46.918 @path[10.0.0.2, 4421]: 21283 00:23:46.918 @path[10.0.0.2, 4421]: 21232 00:23:46.918 @path[10.0.0.2, 4421]: 21181 00:23:46.918 @path[10.0.0.2, 4421]: 21325 00:23:46.919 07:29:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.919 07:29:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.919 07:29:48 -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.919 07:29:48 -- host/multipath.sh@69 -- # port=4421 00:23:46.919 07:29:48 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.919 07:29:48 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.919 07:29:48 -- host/multipath.sh@72 -- # kill 99299 00:23:46.919 07:29:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.919 07:29:48 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.919 [2024-11-04 07:29:48.743869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.743984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.743995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.919 [2024-11-04 07:29:48.744099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:46.920 [2024-11-04 07:29:48.744718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cb370 is same with the state(5) to be set 00:23:47.179 07:29:48 -- host/multipath.sh@101 -- # sleep 1 00:23:48.114 07:29:49 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:48.115 07:29:49 -- host/multipath.sh@65 -- # dtrace_pid=99434 00:23:48.115 07:29:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:48.115 07:29:49 -- host/multipath.sh@66 -- # sleep 6 00:23:54.677 07:29:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:54.677 07:29:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:54.678 07:29:56 -- host/multipath.sh@67 -- # active_port=4420 00:23:54.678 07:29:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.678 Attaching 4 probes... 00:23:54.678 @path[10.0.0.2, 4420]: 20489 00:23:54.678 @path[10.0.0.2, 4420]: 20941 00:23:54.678 @path[10.0.0.2, 4420]: 21004 00:23:54.678 @path[10.0.0.2, 4420]: 21170 00:23:54.678 @path[10.0.0.2, 4420]: 21004 00:23:54.678 07:29:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.678 07:29:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.678 07:29:56 -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.678 07:29:56 -- host/multipath.sh@69 -- # port=4420 00:23:54.678 07:29:56 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.678 07:29:56 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.678 07:29:56 -- host/multipath.sh@72 -- # kill 99434 00:23:54.678 07:29:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.678 07:29:56 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.678 [2024-11-04 07:29:56.275933] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.678 07:29:56 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.678 07:29:56 -- host/multipath.sh@111 -- # sleep 6 00:24:01.258 07:30:02 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:01.258 07:30:02 -- host/multipath.sh@65 -- # dtrace_pid=99627 00:24:01.258 07:30:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98576 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.258 07:30:02 -- host/multipath.sh@66 -- # sleep 6 00:24:07.828 07:30:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.828 07:30:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:07.828 07:30:08 -- host/multipath.sh@67 -- # active_port=4421 00:24:07.828 07:30:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.828 Attaching 4 probes... 
00:24:07.828 @path[10.0.0.2, 4421]: 20486 00:24:07.828 @path[10.0.0.2, 4421]: 21006 00:24:07.828 @path[10.0.0.2, 4421]: 20963 00:24:07.828 @path[10.0.0.2, 4421]: 20972 00:24:07.828 @path[10.0.0.2, 4421]: 21134 00:24:07.828 07:30:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.828 07:30:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.828 07:30:08 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.828 07:30:08 -- host/multipath.sh@69 -- # port=4421 00:24:07.828 07:30:08 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.828 07:30:08 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:07.828 07:30:08 -- host/multipath.sh@72 -- # kill 99627 00:24:07.828 07:30:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.828 07:30:08 -- host/multipath.sh@114 -- # killprocess 98680 00:24:07.828 07:30:08 -- common/autotest_common.sh@926 -- # '[' -z 98680 ']' 00:24:07.828 07:30:08 -- common/autotest_common.sh@930 -- # kill -0 98680 00:24:07.828 07:30:08 -- common/autotest_common.sh@931 -- # uname 00:24:07.828 07:30:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:07.828 07:30:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98680 00:24:07.828 killing process with pid 98680 00:24:07.828 07:30:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:07.828 07:30:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:07.828 07:30:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98680' 00:24:07.828 07:30:08 -- common/autotest_common.sh@945 -- # kill 98680 00:24:07.828 07:30:08 -- common/autotest_common.sh@950 -- # wait 98680 00:24:07.828 Connection closed with partial response: 00:24:07.828 00:24:07.828 00:24:07.828 07:30:09 -- host/multipath.sh@116 -- # wait 98680 00:24:07.828 07:30:09 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:07.828 [2024-11-04 07:29:11.580177] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:07.828 [2024-11-04 07:29:11.580307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98680 ] 00:24:07.828 [2024-11-04 07:29:11.723150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.828 [2024-11-04 07:29:11.801547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.828 Running I/O for 90 seconds... 
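The confirm_io_on_port steps traced above reduce to a small shell helper. What follows is a minimal sketch reconstructed only from the commands visible in this log (rpc.py nvmf_subsystem_get_listeners, the jq ANA-state filter, bpftrace.sh invoked with the bdevperf pid and nvmf_path.bt, and the trace.txt parsing pipeline); the helper body, the ana_state/expected_port/trace_file/rootdir/bdevperf_pid names, and the output redirection are assumptions, not the authoritative test/nvmf/host/multipath.sh source.

    # Sketch of confirm_io_on_port <ana_state> <port>, under the assumptions stated above.
    confirm_io_on_port() {
        local ana_state=$1 expected_port=$2
        # Count I/O per path while bdevperf keeps the subsystem busy.
        # (Redirection into $trace_file is assumed; the log only shows the two arguments.)
        "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" "$rootdir/scripts/bpf/nvmf_path.bt" > "$trace_file" &
        dtrace_pid=$!
        sleep 6
        # Port that the target reports for the listener in the requested ANA state.
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
            | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
        # Port on which the bpf script actually counted I/O (first @path[...] line in trace.txt).
        port=$(cut -d ']' -f1 < "$trace_file" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
        # Both must match the port the test expects.
        [[ $active_port == "$expected_port" ]]
        [[ $port == "$expected_port" ]]
        kill $dtrace_pid
        rm -f "$trace_file"
    }

In the run above this check passes twice: first for the non_optimized listener on port 4420, then, after the 4421 listener is added and set to optimized, for port 4421.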
00:24:07.828 [2024-11-04 07:29:21.820041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.828 [2024-11-04 07:29:21.820137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.828 [2024-11-04 07:29:21.820202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.828 [2024-11-04 07:29:21.820236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.828 [2024-11-04 07:29:21.820268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.828 [2024-11-04 07:29:21.820300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.828 [2024-11-04 07:29:21.820332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.828 [2024-11-04 07:29:21.820365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.828 [2024-11-04 07:29:21.820412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.828 [2024-11-04 07:29:21.820445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.828 [2024-11-04 07:29:21.820465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.820799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.820813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.829 [2024-11-04 07:29:21.821653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.821939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.821970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.821989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 
nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.822002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.822065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.822100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.822131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.822163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.822195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.829 [2024-11-04 07:29:21.822227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.829 [2024-11-04 07:29:21.822246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.829 [2024-11-04 07:29:21.822299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:24:07.830 [2024-11-04 07:29:21.822702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.822986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.822999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.830 [2024-11-04 07:29:21.823502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.823536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.823575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.830 [2024-11-04 07:29:21.823606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.830 [2024-11-04 07:29:21.823625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.823638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.831 [2024-11-04 07:29:21.824362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.824755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.824969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.824988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:24:07.831 [2024-11-04 07:29:21.825409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.831 [2024-11-04 07:29:21.825589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.831 [2024-11-04 07:29:21.825622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.831 [2024-11-04 07:29:21.825640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:21.825653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:21.825672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:21.825685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:21.825704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:21.825717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.391783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.391839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.391855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.392230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.832 [2024-11-04 07:29:28.392272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.392310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.392544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.832 [2024-11-04 07:29:28.392620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.832 [2024-11-04 07:29:28.392641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.832 [2024-11-04 07:29:28.392655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.392974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.392992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:24:07.833 [2024-11-04 07:29:28.393436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.393846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.393974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.393995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.394030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.833 [2024-11-04 07:29:28.394045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.394074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.394088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.394110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.833 [2024-11-04 07:29:28.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.833 [2024-11-04 07:29:28.394146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.834 [2024-11-04 07:29:28.394511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.394803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.394968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.394991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.395399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.395549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.834 [2024-11-04 07:29:28.395725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:07.834 [2024-11-04 07:29:28.395749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.834 [2024-11-04 07:29:28.395762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:24:07.835 [2024-11-04 07:29:28.395786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.395799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.395824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.395838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.395862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:28.395898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.395925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.395939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.395965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.395979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.396003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.396017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:28.396043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:28.396061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.390841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.391605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.391623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.391636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.392379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.392421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:07.835 [2024-11-04 07:29:35.392455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.392536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.835 [2024-11-04 07:29:35.392712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.835 [2024-11-04 07:29:35.392748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:07.835 [2024-11-04 07:29:35.392769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.392783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.392805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.392819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.392840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.392854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.392889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.392907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.392930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.392943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.392965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.392978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:24:07.836 [2024-11-04 07:29:35.393656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.836 [2024-11-04 07:29:35.393670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.393969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.393982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.836 [2024-11-04 07:29:35.394328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:07.836 [2024-11-04 07:29:35.394351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.394364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.394401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.394529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.837 [2024-11-04 07:29:35.394800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.394923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.394946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.394975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.395024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.395062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.395098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.837 [2024-11-04 07:29:35.395141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.395178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.395216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.395252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:35.395275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:35.395288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.745802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.745863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.745898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.745921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.745935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.745962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.745976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.745989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 
nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.837 [2024-11-04 07:29:48.746172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.837 [2024-11-04 07:29:48.746183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:07.838 [2024-11-04 07:29:48.746668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 
07:29:48.746964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.746976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.746988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.838 [2024-11-04 07:29:48.747310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.838 [2024-11-04 07:29:48.747323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747511] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.747731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.747977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.747990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.839 [2024-11-04 07:29:48.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 
[2024-11-04 07:29:48.748401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.839 [2024-11-04 07:29:48.748413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.839 [2024-11-04 07:29:48.748426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.748959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.748983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.748996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.749008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.749033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.749130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.749209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.840 [2024-11-04 07:29:48.749277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.840 [2024-11-04 07:29:48.749464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.840 [2024-11-04 07:29:48.749476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6060 is same with the state(5) to be set 00:24:07.840 [2024-11-04 07:29:48.749491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:07.840 [2024-11-04 07:29:48.749501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:07.840 [2024-11-04 07:29:48.749511] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111704 len:8 PRP1 0x0 PRP2 0x0
00:24:07.840 [2024-11-04 07:29:48.749522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:07.841 [2024-11-04 07:29:48.749585] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f6060 was disconnected and freed. reset controller.
00:24:07.841 [2024-11-04 07:29:48.750730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.841 [2024-11-04 07:29:48.750814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1607a00 (9): Bad file descriptor
00:24:07.841 [2024-11-04 07:29:48.750971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.841 [2024-11-04 07:29:48.751024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.841 [2024-11-04 07:29:48.751054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1607a00 with addr=10.0.0.2, port=4421
00:24:07.841 [2024-11-04 07:29:48.751068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1607a00 is same with the state(5) to be set
00:24:07.841 [2024-11-04 07:29:48.751090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1607a00 (9): Bad file descriptor
00:24:07.841 [2024-11-04 07:29:48.751112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.841 [2024-11-04 07:29:48.751128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.841 [2024-11-04 07:29:48.751142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.841 [2024-11-04 07:29:48.751164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.841 [2024-11-04 07:29:48.751178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.841 [2024-11-04 07:29:58.797118] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
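The sequence above is the path-failover handling in the multipath test: in-flight I/O on qpair 0x15f6060 is aborted with ABORTED - SQ DELETION when the submission queue is torn down, the first reconnect to 10.0.0.2 port 4421 is refused (errno = 111), and the retry roughly ten seconds later brings the controller back. For triaging a run like this from the saved console output, a minimal shell sketch along the following lines is enough; it uses only standard grep/sort/uniq, and the log file name build.log is an assumption rather than something the test produces.

#!/usr/bin/env bash
# Sketch: summarize abort/reset activity in a saved nvmf autotest console log.
# The log path is hypothetical; pass the real file as the first argument.
log="${1:-build.log}"

# Tally aborted completions by the reason strings seen in this run.
grep -oE 'ABORTED - SQ DELETION|ASYMMETRIC ACCESS INACCESSIBLE' "$log" | sort | uniq -c

# List every controller reset attempt and its outcome, in order.
grep -E 'resetting controller|Resetting controller (successful|failed)' "$log"

# Count reconnect attempts that were refused (connection refused, errno 111).
grep -c 'connect() failed, errno = 111' "$log"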
00:24:07.841 Received shutdown signal, test time was about 55.288514 seconds
00:24:07.841
00:24:07.841                                                                                            Latency(us)
00:24:07.841 [2024-11-04T07:30:09.682Z] Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:24:07.841 [2024-11-04T07:30:09.682Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:07.841 Verification LBA range: start 0x0 length 0x4000
00:24:07.841                            Nvme0n1                                                       :      55.29   12097.49      47.26       0.00      0.00    10565.74    1042.62 7015926.69
00:24:07.841 [2024-11-04T07:30:09.682Z] ===================================================================================================================
00:24:07.841 [2024-11-04T07:30:09.682Z] Total                                                                    :              12097.49      47.26       0.00      0.00    10565.74    1042.62 7015926.69
00:24:07.841 07:30:09 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:07.841 07:30:09 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:07.841 07:30:09 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:07.841 07:30:09 -- host/multipath.sh@125 -- # nvmftestfini
00:24:07.841 07:30:09 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:07.841 07:30:09 -- nvmf/common.sh@116 -- # sync
00:24:07.841 07:30:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:07.841 07:30:09 -- nvmf/common.sh@119 -- # set +e
00:24:07.841 07:30:09 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:07.841 07:30:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:07.841 rmmod nvme_tcp
00:24:07.841 rmmod nvme_fabrics
00:24:07.841 rmmod nvme_keyring
00:24:07.841 07:30:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:07.841 07:30:09 -- nvmf/common.sh@123 -- # set -e
00:24:07.841 07:30:09 -- nvmf/common.sh@124 -- # return 0
00:24:07.841 07:30:09 -- nvmf/common.sh@477 -- # '[' -n 98576 ']'
00:24:07.841 07:30:09 -- nvmf/common.sh@478 -- # killprocess 98576
00:24:07.841 07:30:09 -- common/autotest_common.sh@926 -- # '[' -z 98576 ']'
00:24:07.841 07:30:09 -- common/autotest_common.sh@930 -- # kill -0 98576
00:24:07.841 07:30:09 -- common/autotest_common.sh@931 -- # uname
00:24:07.841 07:30:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:07.841 07:30:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98576
00:24:07.841 07:30:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:07.841 07:30:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:07.841 killing process with pid 98576
00:24:07.841 07:30:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98576'
00:24:07.841 07:30:09 -- common/autotest_common.sh@945 -- # kill 98576
00:24:07.841 07:30:09 -- common/autotest_common.sh@950 -- # wait 98576
00:24:07.841 07:30:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:07.841 07:30:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:07.841 07:30:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:07.841 07:30:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:07.841 07:30:09 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:07.841 07:30:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:07.841 07:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:07.841 07:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:08.099 07:30:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:08.099
00:24:08.099 real 1m1.256s
00:24:08.099 user 2m52.798s
00:24:08.099
sys 0m13.742s 00:24:08.099 07:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.099 ************************************ 00:24:08.099 07:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:08.099 END TEST nvmf_multipath 00:24:08.099 ************************************ 00:24:08.099 07:30:09 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.099 07:30:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:08.099 07:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.099 07:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:08.099 ************************************ 00:24:08.100 START TEST nvmf_timeout 00:24:08.100 ************************************ 00:24:08.100 07:30:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:08.100 * Looking for test storage... 00:24:08.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.100 07:30:09 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.100 07:30:09 -- nvmf/common.sh@7 -- # uname -s 00:24:08.100 07:30:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.100 07:30:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.100 07:30:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.100 07:30:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.100 07:30:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.100 07:30:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.100 07:30:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.100 07:30:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.100 07:30:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.100 07:30:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.100 07:30:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:24:08.100 07:30:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:24:08.100 07:30:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.100 07:30:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.100 07:30:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.100 07:30:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.100 07:30:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.100 07:30:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.100 07:30:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.100 07:30:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.100 07:30:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.100 07:30:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.100 07:30:09 -- paths/export.sh@5 -- # export PATH 00:24:08.100 07:30:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.100 07:30:09 -- nvmf/common.sh@46 -- # : 0 00:24:08.100 07:30:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.100 07:30:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.100 07:30:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.100 07:30:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.100 07:30:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.100 07:30:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.100 07:30:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.100 07:30:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.100 07:30:09 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.100 07:30:09 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.100 07:30:09 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.100 07:30:09 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:08.100 07:30:09 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.100 07:30:09 -- host/timeout.sh@19 -- # nvmftestinit 00:24:08.100 07:30:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.100 07:30:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.100 07:30:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.100 07:30:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.100 07:30:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.100 07:30:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.100 07:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.100 07:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.100 07:30:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
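nvmftestinit now falls through to nvmf_veth_init, and the ip/iptables calls logged below build the test topology: one veth pair for the initiator and one per target port, the target ends moved into the nvmf_tgt_ns_spdk namespace, all host-side ends tied together with a bridge, and TCP port 4420 opened on the initiator interface. Condensed into plain commands (same interface names and 10.0.0.x addresses as in the log; the cleanup of leftovers from earlier runs is omitted), the setup is roughly:

    # veth/bridge topology used by the nvmf TCP tests (condensed from the commands logged below)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1 <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2 <-> bridge
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # connectivity checks, also in the log
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1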
00:24:08.100 07:30:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:08.100 07:30:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:08.100 07:30:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:08.100 07:30:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:08.100 07:30:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:08.100 07:30:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.100 07:30:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.100 07:30:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.100 07:30:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:08.100 07:30:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.100 07:30:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.100 07:30:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.100 07:30:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.100 07:30:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.100 07:30:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.100 07:30:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.100 07:30:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.100 07:30:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:08.100 07:30:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:08.100 Cannot find device "nvmf_tgt_br" 00:24:08.100 07:30:09 -- nvmf/common.sh@154 -- # true 00:24:08.100 07:30:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.100 Cannot find device "nvmf_tgt_br2" 00:24:08.100 07:30:09 -- nvmf/common.sh@155 -- # true 00:24:08.100 07:30:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:08.100 07:30:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:08.100 Cannot find device "nvmf_tgt_br" 00:24:08.100 07:30:09 -- nvmf/common.sh@157 -- # true 00:24:08.100 07:30:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:08.100 Cannot find device "nvmf_tgt_br2" 00:24:08.100 07:30:09 -- nvmf/common.sh@158 -- # true 00:24:08.100 07:30:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:08.359 07:30:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:08.359 07:30:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.359 07:30:09 -- nvmf/common.sh@161 -- # true 00:24:08.359 07:30:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.359 07:30:09 -- nvmf/common.sh@162 -- # true 00:24:08.359 07:30:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:08.359 07:30:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:08.359 07:30:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:08.359 07:30:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:08.359 07:30:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:08.359 07:30:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:08.359 07:30:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:24:08.359 07:30:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:08.359 07:30:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:08.359 07:30:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:08.359 07:30:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:08.359 07:30:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:08.359 07:30:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:08.359 07:30:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:08.359 07:30:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:08.359 07:30:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:08.359 07:30:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:08.359 07:30:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:08.359 07:30:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:08.359 07:30:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:08.359 07:30:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:08.359 07:30:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:08.359 07:30:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:08.359 07:30:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:08.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:24:08.359 00:24:08.359 --- 10.0.0.2 ping statistics --- 00:24:08.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.359 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:08.359 07:30:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:08.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:08.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:08.359 00:24:08.359 --- 10.0.0.3 ping statistics --- 00:24:08.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.359 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:08.359 07:30:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:08.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:08.359 00:24:08.359 --- 10.0.0.1 ping statistics --- 00:24:08.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.359 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:08.359 07:30:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.359 07:30:10 -- nvmf/common.sh@421 -- # return 0 00:24:08.359 07:30:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:08.359 07:30:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.359 07:30:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:08.359 07:30:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:08.359 07:30:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.359 07:30:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:08.359 07:30:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:08.359 07:30:10 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:08.359 07:30:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:08.359 07:30:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:08.359 07:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:08.359 07:30:10 -- nvmf/common.sh@469 -- # nvmfpid=99953 00:24:08.359 07:30:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:08.360 07:30:10 -- nvmf/common.sh@470 -- # waitforlisten 99953 00:24:08.360 07:30:10 -- common/autotest_common.sh@819 -- # '[' -z 99953 ']' 00:24:08.360 07:30:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.360 07:30:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:08.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.360 07:30:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.360 07:30:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:08.360 07:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:08.618 [2024-11-04 07:30:10.245992] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:08.618 [2024-11-04 07:30:10.246062] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.618 [2024-11-04 07:30:10.382974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:08.618 [2024-11-04 07:30:10.453187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:08.618 [2024-11-04 07:30:10.453370] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.618 [2024-11-04 07:30:10.453386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.618 [2024-11-04 07:30:10.453398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
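With the namespace reachable and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside the namespace (the launch command and startup notices logged just above, with the reactor messages following), and timeout.sh then provisions the target and starts a bdevperf initiator on a second RPC socket. Pulled together from the commands logged in this section, and assuming the rpc.py default /var/tmp/spdk.sock for the target-side calls, the bring-up amounts to roughly:

    # Target: nvmf_tgt in the namespace, one malloc namespace exported over TCP on 10.0.0.2:4420
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: bdevperf on its own RPC socket; the controller is attached with the
    # timeout knobs under test (5 s controller-loss timeout, 2 s reconnect delay).
    bp=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $bp -q 128 -o 4096 -w verify -t 10 -f &
    $rpc -s $bp bdev_nvme_set_options -r -1
    $rpc -s $bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The recv-state and SQ-deletion noise further down is then triggered by the test kicking off perform_tests through bdevperf.py and removing the 10.0.0.2:4420 listener while the verify workload is still running.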
00:24:08.618 [2024-11-04 07:30:10.453581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.618 [2024-11-04 07:30:10.453600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.595 07:30:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:09.595 07:30:11 -- common/autotest_common.sh@852 -- # return 0 00:24:09.595 07:30:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:09.595 07:30:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:09.595 07:30:11 -- common/autotest_common.sh@10 -- # set +x 00:24:09.595 07:30:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.595 07:30:11 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.595 07:30:11 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:09.853 [2024-11-04 07:30:11.600072] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.853 07:30:11 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.112 Malloc0 00:24:10.112 07:30:11 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.370 07:30:12 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.629 07:30:12 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.888 [2024-11-04 07:30:12.584228] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.888 07:30:12 -- host/timeout.sh@32 -- # bdevperf_pid=100051 00:24:10.888 07:30:12 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:10.888 07:30:12 -- host/timeout.sh@34 -- # waitforlisten 100051 /var/tmp/bdevperf.sock 00:24:10.888 07:30:12 -- common/autotest_common.sh@819 -- # '[' -z 100051 ']' 00:24:10.888 07:30:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.888 07:30:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.888 07:30:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.888 07:30:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.888 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:10.888 [2024-11-04 07:30:12.657086] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:10.888 [2024-11-04 07:30:12.657190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100051 ] 00:24:11.147 [2024-11-04 07:30:12.788949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.147 [2024-11-04 07:30:12.870638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.082 07:30:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:12.082 07:30:13 -- common/autotest_common.sh@852 -- # return 0 00:24:12.082 07:30:13 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:12.341 07:30:13 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:12.599 NVMe0n1 00:24:12.599 07:30:14 -- host/timeout.sh@51 -- # rpc_pid=100093 00:24:12.599 07:30:14 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.599 07:30:14 -- host/timeout.sh@53 -- # sleep 1 00:24:12.599 Running I/O for 10 seconds... 00:24:13.535 07:30:15 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.795 [2024-11-04 07:30:15.416874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.795 [2024-11-04 07:30:15.416960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.416972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.416980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.416988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.416995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 
00:24:13.796 [2024-11-04 07:30:15.417048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14d9490 is same with the state(5) to be set 00:24:13.796 [2024-11-04 07:30:15.417535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:13.796 [2024-11-04 07:30:15.417946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.417990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.417999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 
07:30:15.418120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.796 [2024-11-04 07:30:15.418235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.796 [2024-11-04 07:30:15.418255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.796 [2024-11-04 07:30:15.418324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.796 [2024-11-04 07:30:15.418389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.796 [2024-11-04 07:30:15.418405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.796 [2024-11-04 07:30:15.418669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.796 [2024-11-04 07:30:15.418678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.418986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.418994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:13.797 [2024-11-04 07:30:15.419082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 
07:30:15.419302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419660] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.797 [2024-11-04 07:30:15.419807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.797 [2024-11-04 07:30:15.419978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.419993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2780 is same with the state(5) to be set 00:24:13.797 [2024-11-04 07:30:15.420004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.797 [2024-11-04 07:30:15.420011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.797 [2024-11-04 07:30:15.420019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127992 len:8 PRP1 0x0 PRP2 0x0 00:24:13.797 [2024-11-04 07:30:15.420026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.797 [2024-11-04 07:30:15.420087] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7d2780 was disconnected and freed. reset controller. 
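The wall of *NOTICE* lines above is bdevperf draining its queue after the target dropped the connection: every outstanding READ/WRITE on qid:1 is completed with ABORTED - SQ DELETION (00/08), NVMe generic status 0x08 (command aborted due to submission queue deletion), after which qpair 0x7d2780 is disconnected and freed and bdev_nvme schedules a controller reset. A quick way to summarize such a dump from a saved copy of this console output is sketched below; the log file name is an assumption, nothing in the test writes it.

  # minimal sketch, assuming this console output was saved to a local file
  LOG=nvmf-tcp-timeout-console.log
  grep -o 'ABORTED - SQ DELETION (00/08)' "$LOG" | wc -l               # total aborted completions
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' "$LOG" | sort | uniq -c   # aborted commands split by opcode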
00:24:13.797 [2024-11-04 07:30:15.420325] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.797 [2024-11-04 07:30:15.420395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d8c0 (9): Bad file descriptor 00:24:13.797 [2024-11-04 07:30:15.420501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.797 [2024-11-04 07:30:15.420545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.797 [2024-11-04 07:30:15.420560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74d8c0 with addr=10.0.0.2, port=4420 00:24:13.797 [2024-11-04 07:30:15.420571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d8c0 is same with the state(5) to be set 00:24:13.797 [2024-11-04 07:30:15.420587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d8c0 (9): Bad file descriptor 00:24:13.797 [2024-11-04 07:30:15.420601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:13.797 [2024-11-04 07:30:15.420610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:13.797 [2024-11-04 07:30:15.420621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:13.797 [2024-11-04 07:30:15.420653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:13.797 [2024-11-04 07:30:15.420663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.797 07:30:15 -- host/timeout.sh@56 -- # sleep 2 00:24:15.700 [2024-11-04 07:30:17.420723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.700 [2024-11-04 07:30:17.420794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.700 [2024-11-04 07:30:17.420811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74d8c0 with addr=10.0.0.2, port=4420 00:24:15.700 [2024-11-04 07:30:17.420821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d8c0 is same with the state(5) to be set 00:24:15.700 [2024-11-04 07:30:17.420839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d8c0 (9): Bad file descriptor 00:24:15.700 [2024-11-04 07:30:17.420854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.700 [2024-11-04 07:30:17.420862] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.700 [2024-11-04 07:30:17.420870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.700 [2024-11-04 07:30:17.420902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
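Here the host starts its reconnect loop: posix_sock_create() fails with errno 111 (ECONNREFUSED) because the subsystem's TCP listener on 10.0.0.2:4420 is no longer there, so nvme_tcp_qpair_connect_sock and the controller re-initialization fail, the controller is marked failed, and bdev_nvme arms another reset attempt while the test script waits (host/timeout.sh@56 sleep 2). One way to confirm from the target side that the listener really is gone is to dump the subsystem's listen addresses over the target RPC socket; using the default socket path is an assumption here, since the traced listener RPCs do not pass -s.

  # minimal check, assuming the target app uses the default RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .listen_addresses'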
00:24:15.700 [2024-11-04 07:30:17.420919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.700 07:30:17 -- host/timeout.sh@57 -- # get_controller 00:24:15.700 07:30:17 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.700 07:30:17 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:15.959 07:30:17 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:15.959 07:30:17 -- host/timeout.sh@58 -- # get_bdev 00:24:15.959 07:30:17 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:15.959 07:30:17 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:16.218 07:30:17 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:16.218 07:30:17 -- host/timeout.sh@61 -- # sleep 5 00:24:17.594 [2024-11-04 07:30:19.420977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.594 [2024-11-04 07:30:19.421036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.594 [2024-11-04 07:30:19.421053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74d8c0 with addr=10.0.0.2, port=4420 00:24:17.594 [2024-11-04 07:30:19.421062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d8c0 is same with the state(5) to be set 00:24:17.594 [2024-11-04 07:30:19.421079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d8c0 (9): Bad file descriptor 00:24:17.594 [2024-11-04 07:30:19.421093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.594 [2024-11-04 07:30:19.421102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.594 [2024-11-04 07:30:19.421110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.594 [2024-11-04 07:30:19.421127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:17.594 [2024-11-04 07:30:19.421137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.126 [2024-11-04 07:30:21.421279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.126 [2024-11-04 07:30:21.421322] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:20.126 [2024-11-04 07:30:21.421332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:20.126 [2024-11-04 07:30:21.421340] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:20.126 [2024-11-04 07:30:21.421363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
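While the connection stays down, timeout.sh keeps polling the bdevperf application over its own RPC socket: the get_controller and get_bdev helpers (host/timeout.sh@41 and @37) still report NVMe0 and NVMe0n1, so the controller object and the bdev survive the broken transport for now. Written out, the two traced checks are simply:

  # the two polling helpers from the trace, as plain commands
  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0 while the controller object still exists
  $RPC bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1 while the bdev is still registered

After the following sleep 5 the reconnect attempts keep failing with the same ECONNREFUSED, and by 07:30:21 the controller is reported as already in failed state and the reset attempt is abandoned.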
00:24:20.693 00:24:20.693 Latency(us) 00:24:20.693 [2024-11-04T07:30:22.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.693 [2024-11-04T07:30:22.534Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.693 Verification LBA range: start 0x0 length 0x4000 00:24:20.693 NVMe0n1 : 8.09 1968.51 7.69 15.81 0.00 64425.86 2308.65 7015926.69 00:24:20.693 [2024-11-04T07:30:22.534Z] =================================================================================================================== 00:24:20.693 [2024-11-04T07:30:22.534Z] Total : 1968.51 7.69 15.81 0.00 64425.86 2308.65 7015926.69 00:24:20.693 0 00:24:21.260 07:30:22 -- host/timeout.sh@62 -- # get_controller 00:24:21.261 07:30:22 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:21.261 07:30:22 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.519 07:30:23 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:21.519 07:30:23 -- host/timeout.sh@63 -- # get_bdev 00:24:21.519 07:30:23 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:21.519 07:30:23 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:21.778 07:30:23 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:21.778 07:30:23 -- host/timeout.sh@65 -- # wait 100093 00:24:21.778 07:30:23 -- host/timeout.sh@67 -- # killprocess 100051 00:24:21.778 07:30:23 -- common/autotest_common.sh@926 -- # '[' -z 100051 ']' 00:24:21.778 07:30:23 -- common/autotest_common.sh@930 -- # kill -0 100051 00:24:21.778 07:30:23 -- common/autotest_common.sh@931 -- # uname 00:24:21.778 07:30:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:21.778 07:30:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100051 00:24:21.778 07:30:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:21.778 07:30:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:21.778 killing process with pid 100051 00:24:21.778 07:30:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100051' 00:24:21.778 Received shutdown signal, test time was about 9.198039 seconds 00:24:21.778 00:24:21.778 Latency(us) 00:24:21.778 [2024-11-04T07:30:23.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.778 [2024-11-04T07:30:23.619Z] =================================================================================================================== 00:24:21.778 [2024-11-04T07:30:23.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.778 07:30:23 -- common/autotest_common.sh@945 -- # kill 100051 00:24:21.778 07:30:23 -- common/autotest_common.sh@950 -- # wait 100051 00:24:22.037 07:30:23 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.296 [2024-11-04 07:30:24.030686] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.296 07:30:24 -- host/timeout.sh@74 -- # bdevperf_pid=100252 00:24:22.296 07:30:24 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:22.296 07:30:24 -- host/timeout.sh@76 -- # waitforlisten 100252 /var/tmp/bdevperf.sock 00:24:22.296 07:30:24 -- common/autotest_common.sh@819 -- # '[' -z 100252 ']' 00:24:22.296 07:30:24 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.296 07:30:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:22.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.296 07:30:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.296 07:30:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:22.296 07:30:24 -- common/autotest_common.sh@10 -- # set +x 00:24:22.296 [2024-11-04 07:30:24.089770] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:22.296 [2024-11-04 07:30:24.089847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100252 ] 00:24:22.554 [2024-11-04 07:30:24.214466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.554 [2024-11-04 07:30:24.283817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.491 07:30:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:23.491 07:30:24 -- common/autotest_common.sh@852 -- # return 0 00:24:23.491 07:30:24 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:23.491 07:30:25 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:23.750 NVMe0n1 00:24:23.750 07:30:25 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.750 07:30:25 -- host/timeout.sh@84 -- # rpc_pid=100294 00:24:23.750 07:30:25 -- host/timeout.sh@86 -- # sleep 1 00:24:23.750 Running I/O for 10 seconds... 
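That abandoned reset ends the first run: the summary table above shows 8.09 s of runtime at 1968.51 IOPS (7.69 MiB/s) with 15.81 failed I/Os per second and an average latency around 64 ms, consistent with the connection being dropped mid-run, and the follow-up get_controller/get_bdev checks now match the empty string because the controller and bdev have been torn down. The script then kills the first bdevperf (pid 100051), re-adds the TCP listener on 10.0.0.2:4420, and brings up a second bdevperf instance (pid 100252) that it configures entirely over the bdevperf RPC socket: bdev_nvme_set_options -r -1 is applied, NVMe0 is re-attached with a 5 s controller-loss timeout, 2 s fast-io-fail timeout and 1 s reconnect delay, and I/O is started through bdevperf.py perform_tests (rpc_pid 100294). Condensed, with the shell-function plumbing from timeout.sh and autotest_common.sh omitted, the traced sequence is:

  # condensed restart sequence, paths and arguments exactly as traced above
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # (timeout.sh waits for the socket via waitforlisten before issuing the RPCs below)
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &

The very next traced step (host/timeout.sh@87, below) removes that listener again, which is what triggers the fresh burst of recv-state errors and ABORTED - SQ DELETION completions that follows.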
00:24:24.686 07:30:26 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.948 [2024-11-04 07:30:26.686627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.686998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687115] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.948 [2024-11-04 07:30:26.687122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the 
state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167eca0 is same with the state(5) to be set 00:24:24.949 [2024-11-04 07:30:26.687775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.687987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:24.949 [2024-11-04 07:30:26.688124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.949 [2024-11-04 07:30:26.688259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.949 [2024-11-04 07:30:26.688284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.949 [2024-11-04 07:30:26.688292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:80 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.950 [2024-11-04 07:30:26.688734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 
07:30:26.688812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.950 [2024-11-04 07:30:26.688979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.950 [2024-11-04 07:30:26.688986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.688996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689722] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.951 [2024-11-04 07:30:26.689828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.951 [2024-11-04 07:30:26.689854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.951 [2024-11-04 07:30:26.689861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.689905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.689922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.689938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.689954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.689970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.689979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.689999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.690065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.690082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5840 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.690098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.952 [2024-11-04 07:30:26.690130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.952 [2024-11-04 07:30:26.690247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.952 [2024-11-04 07:30:26.690256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998660 is same with the state(5) to be set 00:24:24.952 [2024-11-04 07:30:26.690276] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:24.952 [2024-11-04 07:30:26.690283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:24.952 [2024-11-04 07:30:26.690290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5208 len:8 PRP1 0x0 PRP2 0x0
00:24:24.952 [2024-11-04 07:30:26.690298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:24.952 [2024-11-04 07:30:26.690346] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x998660 was disconnected and freed. reset controller.
00:24:24.952 [2024-11-04 07:30:26.690543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:24.952 [2024-11-04 07:30:26.690625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor
00:24:24.952 [2024-11-04 07:30:26.690733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.952 [2024-11-04 07:30:26.690775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.952 [2024-11-04 07:30:26.690790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420
00:24:24.952 [2024-11-04 07:30:26.690799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set
00:24:24.952 [2024-11-04 07:30:26.690815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor
00:24:24.952 [2024-11-04 07:30:26.690829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:24.952 [2024-11-04 07:30:26.690837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:24.952 [2024-11-04 07:30:26.690846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:24.952 [2024-11-04 07:30:26.690862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:24.952 [2024-11-04 07:30:26.690884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:24.952 07:30:26 -- host/timeout.sh@90 -- # sleep 1
00:24:25.940 [2024-11-04 07:30:27.690951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.940 [2024-11-04 07:30:27.691025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.940 [2024-11-04 07:30:27.691042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420
00:24:25.940 [2024-11-04 07:30:27.691052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set
00:24:25.940 [2024-11-04 07:30:27.691069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor
00:24:25.940 [2024-11-04 07:30:27.691083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:25.940 [2024-11-04 07:30:27.691092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:25.940 [2024-11-04 07:30:27.691100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:25.940 [2024-11-04 07:30:27.691117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:25.940 [2024-11-04 07:30:27.691126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.940 07:30:27 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.212 [2024-11-04 07:30:27.942002] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:26.212 07:30:27 -- host/timeout.sh@92 -- # wait 100294
00:24:27.147 [2024-11-04 07:30:28.706523] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:33.709
00:24:33.709 Latency(us)
00:24:33.709 [2024-11-04T07:30:35.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.709 [2024-11-04T07:30:35.550Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:33.709 Verification LBA range: start 0x0 length 0x4000
00:24:33.709 NVMe0n1 : 10.01 10654.57 41.62 0.00 0.00 11997.41 1280.93 3019898.88
00:24:33.709 [2024-11-04T07:30:35.550Z] ===================================================================================================================
00:24:33.709 [2024-11-04T07:30:35.550Z] Total : 10654.57 41.62 0.00 0.00 11997.41 1280.93 3019898.88
00:24:33.709 0
00:24:33.709 07:30:35 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:33.709 07:30:35 -- host/timeout.sh@97 -- # rpc_pid=100415
00:24:33.709 07:30:35 -- host/timeout.sh@98 -- # sleep 1
00:24:33.967 Running I/O for 10 seconds...
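The traced host/timeout.sh steps above (and the remove_listener call immediately below) toggle the target's TCP listener around each 10-second bdevperf verify run; that is what produces the ABORTED - SQ DELETION notices and the failed reconnect attempts in this log. A minimal sketch of one such cycle, using only the commands visible in the trace (the RPC, BPF, and NQN variables are shorthand introduced here, not names from the test script), might look like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener: queued I/O on the qpair is aborted and the host's
  # reconnect attempts fail with connect() errno 111 until it comes back.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # Re-add the listener: the controller reset then completes successfully.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # Start the next 10-second verify pass against the shared bdevperf socket.
  "$BPF" -s /var/tmp/bdevperf.sock perform_tests

This mirrors the trace lines tagged host/timeout.sh@90, @91, @96, and @99; the exact ordering and waits inside the real script are not shown in full here.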
00:24:34.903 07:30:36 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.165 [2024-11-04 07:30:36.793676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.793993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14da110 is same with the state(5) to be set 00:24:35.165 [2024-11-04 07:30:36.794545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 
[2024-11-04 07:30:36.794615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.165 [2024-11-04 07:30:36.794776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.165 [2024-11-04 07:30:36.794783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.794894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.794990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.794997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 
07:30:36.795330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.166 [2024-11-04 07:30:36.795428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.166 [2024-11-04 07:30:36.795464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.166 [2024-11-04 07:30:36.795472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:35.167 [2024-11-04 07:30:36.795858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.795972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.795999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796032] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.796039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.796057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.167 [2024-11-04 07:30:36.796129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.167 [2024-11-04 07:30:36.796145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.167 [2024-11-04 07:30:36.796154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5760 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 07:30:36.796529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.168 [2024-11-04 
07:30:36.796568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.168 [2024-11-04 07:30:36.796797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9641d0 is same with the state(5) to be set 00:24:35.168 [2024-11-04 07:30:36.796824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.168 [2024-11-04 07:30:36.796831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.168 [2024-11-04 07:30:36.796838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:8 PRP1 0x0 PRP2 0x0 00:24:35.168 [2024-11-04 07:30:36.796846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.168 [2024-11-04 07:30:36.796886] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9641d0 was disconnected and freed. reset controller. 
00:24:35.169 [2024-11-04 07:30:36.797063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.169 [2024-11-04 07:30:36.797130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor 00:24:35.169 [2024-11-04 07:30:36.797209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.169 [2024-11-04 07:30:36.797252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.169 [2024-11-04 07:30:36.797266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420 00:24:35.169 [2024-11-04 07:30:36.797274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set 00:24:35.169 [2024-11-04 07:30:36.797291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor 00:24:35.169 [2024-11-04 07:30:36.797305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.169 [2024-11-04 07:30:36.797314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.169 [2024-11-04 07:30:36.797323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.169 [2024-11-04 07:30:36.797341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.169 [2024-11-04 07:30:36.797350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.169 07:30:36 -- host/timeout.sh@101 -- # sleep 3 00:24:36.104 [2024-11-04 07:30:37.797410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.104 [2024-11-04 07:30:37.797486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.104 [2024-11-04 07:30:37.797502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420 00:24:36.104 [2024-11-04 07:30:37.797511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set 00:24:36.104 [2024-11-04 07:30:37.797528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor 00:24:36.104 [2024-11-04 07:30:37.797543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.104 [2024-11-04 07:30:37.797552] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.104 [2024-11-04 07:30:37.797560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.104 [2024-11-04 07:30:37.797576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.104 [2024-11-04 07:30:37.797586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.039 [2024-11-04 07:30:38.797645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-11-04 07:30:38.797719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-11-04 07:30:38.797735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420 00:24:37.039 [2024-11-04 07:30:38.797744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set 00:24:37.039 [2024-11-04 07:30:38.797761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor 00:24:37.039 [2024-11-04 07:30:38.797775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.039 [2024-11-04 07:30:38.797784] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.039 [2024-11-04 07:30:38.797792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.039 [2024-11-04 07:30:38.797809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.039 [2024-11-04 07:30:38.797818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.973 [2024-11-04 07:30:39.799355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.973 [2024-11-04 07:30:39.799433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.973 [2024-11-04 07:30:39.799450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9138c0 with addr=10.0.0.2, port=4420 00:24:37.973 [2024-11-04 07:30:39.799459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9138c0 is same with the state(5) to be set 00:24:37.973 [2024-11-04 07:30:39.799588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9138c0 (9): Bad file descriptor 00:24:37.973 [2024-11-04 07:30:39.799682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.973 [2024-11-04 07:30:39.799692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.973 [2024-11-04 07:30:39.799700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.973 [2024-11-04 07:30:39.801530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.973 [2024-11-04 07:30:39.801554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.231 07:30:39 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.231 [2024-11-04 07:30:40.057767] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.488 07:30:40 -- host/timeout.sh@103 -- # wait 100415 00:24:39.054 [2024-11-04 07:30:40.818952] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
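The reconnect sequence above is driven entirely over the RPC socket: the test drops the subsystem's TCP listener, lets the host's reconnect attempts fail with connect() errno = 111 (ECONNREFUSED), then re-adds the listener so the queued reset can finally go through, which is what the "Resetting controller successful" notice records. A minimal sketch of that flow is below, using only the rpc.py calls, NQN, and address that appear verbatim in this log; the surrounding bdevperf orchestration is omitted and the sleep length is illustrative rather than taken from host/timeout.sh.

    #!/usr/bin/env bash
    # Hedged sketch: drop and restore an NVMe-oF TCP listener around a reconnect test.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Remove the listener: in-flight I/O is aborted (ABORTED - SQ DELETION above) and
    # the initiator's reconnect attempts fail with connect() errno = 111 until the
    # port comes back.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    # Allow a few failed reconnect cycles, in the spirit of host/timeout.sh@101's sleep.
    sleep 3

    # Restore the listener; the next reconnect succeeds and the controller reset completes.
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420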
00:24:44.322 00:24:44.322 Latency(us) 00:24:44.322 [2024-11-04T07:30:46.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.322 [2024-11-04T07:30:46.163Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.322 Verification LBA range: start 0x0 length 0x4000 00:24:44.322 NVMe0n1 : 10.01 9306.63 36.35 7405.69 0.00 7648.30 629.29 3019898.88 00:24:44.322 [2024-11-04T07:30:46.163Z] =================================================================================================================== 00:24:44.322 [2024-11-04T07:30:46.163Z] Total : 9306.63 36.35 7405.69 0.00 7648.30 0.00 3019898.88 00:24:44.322 0 00:24:44.322 07:30:45 -- host/timeout.sh@105 -- # killprocess 100252 00:24:44.322 07:30:45 -- common/autotest_common.sh@926 -- # '[' -z 100252 ']' 00:24:44.322 07:30:45 -- common/autotest_common.sh@930 -- # kill -0 100252 00:24:44.322 07:30:45 -- common/autotest_common.sh@931 -- # uname 00:24:44.322 07:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:44.323 07:30:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100252 00:24:44.323 killing process with pid 100252 00:24:44.323 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.323 00:24:44.323 Latency(us) 00:24:44.323 [2024-11-04T07:30:46.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.323 [2024-11-04T07:30:46.164Z] =================================================================================================================== 00:24:44.323 [2024-11-04T07:30:46.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.323 07:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:44.323 07:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:44.323 07:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100252' 00:24:44.323 07:30:45 -- common/autotest_common.sh@945 -- # kill 100252 00:24:44.323 07:30:45 -- common/autotest_common.sh@950 -- # wait 100252 00:24:44.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.323 07:30:45 -- host/timeout.sh@110 -- # bdevperf_pid=100537 00:24:44.323 07:30:45 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:44.323 07:30:45 -- host/timeout.sh@112 -- # waitforlisten 100537 /var/tmp/bdevperf.sock 00:24:44.323 07:30:45 -- common/autotest_common.sh@819 -- # '[' -z 100537 ']' 00:24:44.323 07:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.323 07:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:44.323 07:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.323 07:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:44.323 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:24:44.323 [2024-11-04 07:30:46.021740] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:44.323 [2024-11-04 07:30:46.021847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100537 ] 00:24:44.323 [2024-11-04 07:30:46.161659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.581 [2024-11-04 07:30:46.216757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.148 07:30:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:45.148 07:30:46 -- common/autotest_common.sh@852 -- # return 0 00:24:45.148 07:30:46 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100537 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:45.148 07:30:46 -- host/timeout.sh@116 -- # dtrace_pid=100565 00:24:45.148 07:30:46 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:45.406 07:30:47 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:45.664 NVMe0n1 00:24:45.923 07:30:47 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.923 07:30:47 -- host/timeout.sh@124 -- # rpc_pid=100619 00:24:45.923 07:30:47 -- host/timeout.sh@125 -- # sleep 1 00:24:45.923 Running I/O for 10 seconds... 00:24:46.857 07:30:48 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.120 [2024-11-04 07:30:48.761241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761573] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.120 [2024-11-04 07:30:48.761677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the 
state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.761824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ddba0 is same with the state(5) to be set 00:24:47.121 [2024-11-04 07:30:48.762151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 
[2024-11-04 07:30:48.762696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.121 [2024-11-04 07:30:48.762751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.121 [2024-11-04 07:30:48.762761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762887] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.762985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.762993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763075] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.122 [2024-11-04 07:30:48.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.122 [2024-11-04 07:30:48.763484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.122 [2024-11-04 07:30:48.763492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763608] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.763992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.763999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.123 [2024-11-04 07:30:48.764188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.123 [2024-11-04 07:30:48.764197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.124 [2024-11-04 07:30:48.764504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f780 is same with the state(5) to be set 00:24:47.124 [2024-11-04 
07:30:48.764529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.124 [2024-11-04 07:30:48.764536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.124 [2024-11-04 07:30:48.764543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13368 len:8 PRP1 0x0 PRP2 0x0 00:24:47.124 [2024-11-04 07:30:48.764551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.124 [2024-11-04 07:30:48.764597] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c9f780 was disconnected and freed. reset controller. 00:24:47.124 [2024-11-04 07:30:48.764842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.124 [2024-11-04 07:30:48.764926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1a8c0 (9): Bad file descriptor 00:24:47.124 [2024-11-04 07:30:48.765013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.124 [2024-11-04 07:30:48.765064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.124 [2024-11-04 07:30:48.765079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1a8c0 with addr=10.0.0.2, port=4420 00:24:47.124 [2024-11-04 07:30:48.765088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a8c0 is same with the state(5) to be set 00:24:47.124 [2024-11-04 07:30:48.765103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1a8c0 (9): Bad file descriptor 00:24:47.124 [2024-11-04 07:30:48.765118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.124 [2024-11-04 07:30:48.765126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.124 [2024-11-04 07:30:48.765145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.124 [2024-11-04 07:30:48.765162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:47.124 [2024-11-04 07:30:48.765171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.124 07:30:48 -- host/timeout.sh@128 -- # wait 100619 00:24:49.068 [2024-11-04 07:30:50.765330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.068 [2024-11-04 07:30:50.765430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.068 [2024-11-04 07:30:50.765457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1a8c0 with addr=10.0.0.2, port=4420 00:24:49.068 [2024-11-04 07:30:50.765472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a8c0 is same with the state(5) to be set 00:24:49.068 [2024-11-04 07:30:50.765498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1a8c0 (9): Bad file descriptor 00:24:49.068 [2024-11-04 07:30:50.765522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.068 [2024-11-04 07:30:50.765531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.068 [2024-11-04 07:30:50.765541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.068 [2024-11-04 07:30:50.765565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:49.068 [2024-11-04 07:30:50.765576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.970 [2024-11-04 07:30:52.765651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.970 [2024-11-04 07:30:52.765725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.970 [2024-11-04 07:30:52.765741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1a8c0 with addr=10.0.0.2, port=4420 00:24:50.970 [2024-11-04 07:30:52.765751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a8c0 is same with the state(5) to be set 00:24:50.970 [2024-11-04 07:30:52.765767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1a8c0 (9): Bad file descriptor 00:24:50.970 [2024-11-04 07:30:52.765783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.970 [2024-11-04 07:30:52.765795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.970 [2024-11-04 07:30:52.765803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.970 [2024-11-04 07:30:52.765820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.970 [2024-11-04 07:30:52.765829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.502 [2024-11-04 07:30:54.765869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
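The entries above show the reconnect loop of the timeout test: each attempt to re-establish the TCP connection to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED), and the attempts land roughly two seconds apart (07:30:48, 07:30:50, 07:30:52, 07:30:54) before bdev_nvme reports the controller as failed. One quick way to eyeball that spacing from a saved copy of this output is to grep for the reset notices; this is illustrative only and not part of timeout.sh, and the capture file name is hypothetical:

    # Illustrative only: 'nvmf_timeout.log' is a hypothetical capture of the output above.
    # Each matching line carries a wall-clock timestamp, so the ~2 s reconnect spacing is easy to read off.
    grep 'nvme_ctrlr_disconnect: \*NOTICE\*: .* resetting controller' nvmf_timeout.log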
00:24:53.502 [2024-11-04 07:30:54.765918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.502 [2024-11-04 07:30:54.765928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.502 [2024-11-04 07:30:54.765940] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:53.502 [2024-11-04 07:30:54.765957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.069 00:24:54.069 Latency(us) 00:24:54.069 [2024-11-04T07:30:55.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.069 [2024-11-04T07:30:55.910Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:54.069 NVMe0n1 : 8.14 3179.06 12.42 15.73 0.00 40014.70 1869.27 7015926.69 00:24:54.069 [2024-11-04T07:30:55.910Z] =================================================================================================================== 00:24:54.069 [2024-11-04T07:30:55.910Z] Total : 3179.06 12.42 15.73 0.00 40014.70 1869.27 7015926.69 00:24:54.069 0 00:24:54.069 07:30:55 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.069 Attaching 5 probes... 00:24:54.069 1369.327406: reset bdev controller NVMe0 00:24:54.069 1369.457295: reconnect bdev controller NVMe0 00:24:54.069 3369.680749: reconnect delay bdev controller NVMe0 00:24:54.069 3369.702160: reconnect bdev controller NVMe0 00:24:54.069 5370.084782: reconnect delay bdev controller NVMe0 00:24:54.069 5370.097327: reconnect bdev controller NVMe0 00:24:54.069 7370.335344: reconnect delay bdev controller NVMe0 00:24:54.069 7370.348158: reconnect bdev controller NVMe0 00:24:54.069 07:30:55 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:54.069 07:30:55 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:54.069 07:30:55 -- host/timeout.sh@136 -- # kill 100565 00:24:54.069 07:30:55 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:54.069 07:30:55 -- host/timeout.sh@139 -- # killprocess 100537 00:24:54.069 07:30:55 -- common/autotest_common.sh@926 -- # '[' -z 100537 ']' 00:24:54.069 07:30:55 -- common/autotest_common.sh@930 -- # kill -0 100537 00:24:54.069 07:30:55 -- common/autotest_common.sh@931 -- # uname 00:24:54.069 07:30:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:54.069 07:30:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100537 00:24:54.069 07:30:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:54.069 07:30:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:54.069 killing process with pid 100537 00:24:54.069 07:30:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100537' 00:24:54.069 Received shutdown signal, test time was about 8.204123 seconds 00:24:54.069 00:24:54.069 Latency(us) 00:24:54.069 [2024-11-04T07:30:55.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.069 [2024-11-04T07:30:55.910Z] =================================================================================================================== 00:24:54.069 [2024-11-04T07:30:55.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.069 07:30:55 -- common/autotest_common.sh@945 -- # kill 100537 00:24:54.069 07:30:55 -- common/autotest_common.sh@950 -- # wait 100537 00:24:54.328 
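The pass/fail decision for the timeout test is visible just above: the script dumps the bdev_nvme trace points to trace.txt, counts the 'reconnect delay bdev controller NVMe0' entries (three in this run), and only proceeds to kill its helpers and remove the trace file because that count is greater than two. A minimal sketch of that check, with illustrative variable names rather than the literal timeout.sh code:

    # Minimal sketch of the reconnect-delay check above (illustrative, not the
    # exact timeout.sh implementation): require more than two delayed reconnects.
    trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    if (( delays <= 2 )); then
        echo "expected more than 2 reconnect delays, got $delays" >&2
        exit 1
    fi
    rm -f "$trace_file"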
07:30:56 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.586 07:30:56 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:54.586 07:30:56 -- host/timeout.sh@145 -- # nvmftestfini 00:24:54.586 07:30:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:54.586 07:30:56 -- nvmf/common.sh@116 -- # sync 00:24:54.845 07:30:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:54.845 07:30:56 -- nvmf/common.sh@119 -- # set +e 00:24:54.845 07:30:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:54.845 07:30:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:54.845 rmmod nvme_tcp 00:24:54.845 rmmod nvme_fabrics 00:24:54.845 rmmod nvme_keyring 00:24:54.845 07:30:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:54.845 07:30:56 -- nvmf/common.sh@123 -- # set -e 00:24:54.845 07:30:56 -- nvmf/common.sh@124 -- # return 0 00:24:54.845 07:30:56 -- nvmf/common.sh@477 -- # '[' -n 99953 ']' 00:24:54.845 07:30:56 -- nvmf/common.sh@478 -- # killprocess 99953 00:24:54.845 07:30:56 -- common/autotest_common.sh@926 -- # '[' -z 99953 ']' 00:24:54.845 07:30:56 -- common/autotest_common.sh@930 -- # kill -0 99953 00:24:54.845 07:30:56 -- common/autotest_common.sh@931 -- # uname 00:24:54.845 07:30:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:54.845 07:30:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99953 00:24:54.845 07:30:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:54.845 07:30:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:54.845 killing process with pid 99953 00:24:54.845 07:30:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99953' 00:24:54.845 07:30:56 -- common/autotest_common.sh@945 -- # kill 99953 00:24:54.845 07:30:56 -- common/autotest_common.sh@950 -- # wait 99953 00:24:55.104 07:30:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:55.104 07:30:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:55.104 07:30:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:55.104 07:30:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.104 07:30:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:55.104 07:30:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.104 07:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.104 07:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.104 07:30:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:55.104 00:24:55.104 real 0m47.012s 00:24:55.104 user 2m17.637s 00:24:55.104 sys 0m5.336s 00:24:55.104 07:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.104 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:24:55.104 ************************************ 00:24:55.104 END TEST nvmf_timeout 00:24:55.104 ************************************ 00:24:55.104 07:30:56 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:55.104 07:30:56 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:55.104 07:30:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.104 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:24:55.104 07:30:56 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:55.104 00:24:55.104 real 17m26.142s 00:24:55.104 user 55m35.423s 00:24:55.104 sys 3m43.644s 00:24:55.104 07:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.104 07:30:56 -- common/autotest_common.sh@10 -- # 
set +x 00:24:55.104 ************************************ 00:24:55.104 END TEST nvmf_tcp 00:24:55.104 ************************************ 00:24:55.104 07:30:56 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:24:55.104 07:30:56 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.104 07:30:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:55.104 07:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.104 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:24:55.104 ************************************ 00:24:55.104 START TEST spdkcli_nvmf_tcp 00:24:55.104 ************************************ 00:24:55.104 07:30:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:55.363 * Looking for test storage... 00:24:55.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.363 07:30:56 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:55.363 07:30:56 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:55.363 07:30:56 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:55.363 07:30:56 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.363 07:30:56 -- nvmf/common.sh@7 -- # uname -s 00:24:55.363 07:30:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.363 07:30:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.363 07:30:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.363 07:30:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.363 07:30:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.363 07:30:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.363 07:30:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.363 07:30:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.363 07:30:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.363 07:30:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.363 07:30:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:24:55.363 07:30:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:24:55.363 07:30:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.363 07:30:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.363 07:30:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.363 07:30:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.363 07:30:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.363 07:30:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.363 07:30:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.363 07:30:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.363 07:30:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.364 07:30:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.364 07:30:56 -- paths/export.sh@5 -- # export PATH 00:24:55.364 07:30:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.364 07:30:56 -- nvmf/common.sh@46 -- # : 0 00:24:55.364 07:30:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.364 07:30:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.364 07:30:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.364 07:30:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.364 07:30:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.364 07:30:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.364 07:30:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.364 07:30:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.364 07:30:56 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:55.364 07:30:56 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:55.364 07:30:56 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:55.364 07:30:56 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:55.364 07:30:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.364 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:24:55.364 07:30:56 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:55.364 07:30:56 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100836 00:24:55.364 07:30:56 -- spdkcli/common.sh@34 -- # waitforlisten 100836 00:24:55.364 07:30:56 -- common/autotest_common.sh@819 -- # '[' -z 100836 ']' 00:24:55.364 07:30:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.364 07:30:56 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:55.364 07:30:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.364 07:30:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.364 07:30:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.364 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:24:55.364 [2024-11-04 07:30:57.052565] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
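At this point the spdkcli test has launched its own nvmf_tgt (pid 100836, core mask 0x3) and waitforlisten is blocking until the target's RPC socket at /var/tmp/spdk.sock answers; no spdkcli or rpc.py commands can be issued before that. A minimal sketch of that start-and-wait pattern, assuming the default socket path and using rpc_get_methods purely as a liveness probe (this is not the autotest_common.sh helper itself):

    # Sketch only: start the target, then poll its RPC socket before configuring it.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    tgt_pid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done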
00:24:55.364 [2024-11-04 07:30:57.052674] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100836 ] 00:24:55.364 [2024-11-04 07:30:57.186983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:55.623 [2024-11-04 07:30:57.247837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.623 [2024-11-04 07:30:57.248091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.623 [2024-11-04 07:30:57.248095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.559 07:30:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.559 07:30:58 -- common/autotest_common.sh@852 -- # return 0 00:24:56.559 07:30:58 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:56.559 07:30:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:56.559 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:24:56.559 07:30:58 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:56.559 07:30:58 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:56.559 07:30:58 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:56.559 07:30:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:56.559 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:24:56.559 07:30:58 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:56.559 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:56.559 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:56.559 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:56.559 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:56.559 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:56.559 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:56.559 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:56.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:56.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:56.559 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.559 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:56.560 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:56.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:56.560 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:56.560 ' 00:24:56.818 [2024-11-04 07:30:58.513185] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:59.351 [2024-11-04 07:31:00.790635] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.286 [2024-11-04 07:31:02.080407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:02.818 [2024-11-04 07:31:04.471316] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:04.721 [2024-11-04 07:31:06.529970] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:06.625 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:06.625 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:06.625 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.625 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.625 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.625 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:06.625 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:06.625 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:06.625 07:31:08 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:06.625 07:31:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.625 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:25:06.625 07:31:08 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:06.625 07:31:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:06.625 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:25:06.625 07:31:08 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:06.625 07:31:08 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:06.883 07:31:08 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:07.152 07:31:08 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:07.152 07:31:08 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:07.152 07:31:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.152 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 07:31:08 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:07.152 07:31:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.152 07:31:08 -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.152 07:31:08 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:07.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:07.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:07.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:07.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:07.152 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:07.152 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:07.152 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:07.152 ' 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:12.470 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:12.470 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:12.470 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:12.470 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:12.729 07:31:14 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:12.729 07:31:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.729 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:25:12.729 07:31:14 -- spdkcli/nvmf.sh@90 -- # killprocess 100836 00:25:12.729 07:31:14 -- common/autotest_common.sh@926 -- # '[' -z 100836 ']' 00:25:12.729 07:31:14 -- common/autotest_common.sh@930 -- # kill -0 100836 00:25:12.729 07:31:14 -- common/autotest_common.sh@931 -- # uname 00:25:12.729 07:31:14 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:12.729 07:31:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100836 00:25:12.729 killing process with pid 100836 00:25:12.729 07:31:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:12.729 07:31:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:12.729 07:31:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100836' 00:25:12.729 07:31:14 -- common/autotest_common.sh@945 -- # kill 100836 00:25:12.729 [2024-11-04 07:31:14.409018] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.729 07:31:14 -- common/autotest_common.sh@950 -- # wait 100836 00:25:12.988 07:31:14 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:12.988 07:31:14 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:12.988 Process with pid 100836 is not found 00:25:12.988 07:31:14 -- spdkcli/common.sh@13 -- # '[' -n 100836 ']' 00:25:12.988 07:31:14 -- spdkcli/common.sh@14 -- # killprocess 100836 00:25:12.988 07:31:14 -- common/autotest_common.sh@926 -- # '[' -z 100836 ']' 00:25:12.988 07:31:14 -- common/autotest_common.sh@930 -- # kill -0 100836 00:25:12.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100836) - No such process 00:25:12.988 07:31:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100836 is not found' 00:25:12.988 07:31:14 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:12.988 07:31:14 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:12.988 07:31:14 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:12.988 ************************************ 00:25:12.988 END TEST spdkcli_nvmf_tcp 00:25:12.988 ************************************ 00:25:12.988 00:25:12.988 real 0m17.772s 00:25:12.988 user 0m38.663s 00:25:12.988 sys 0m0.878s 00:25:12.988 07:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.988 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:25:12.988 07:31:14 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:12.988 07:31:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:12.988 07:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.988 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:25:12.988 ************************************ 00:25:12.988 START TEST nvmf_identify_passthru 00:25:12.988 ************************************ 00:25:12.988 07:31:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:12.988 * Looking for test storage... 
00:25:12.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:12.988 07:31:14 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:12.988 07:31:14 -- nvmf/common.sh@7 -- # uname -s 00:25:12.988 07:31:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.988 07:31:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.988 07:31:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.988 07:31:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.988 07:31:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.988 07:31:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.988 07:31:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.988 07:31:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.988 07:31:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.988 07:31:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.988 07:31:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:25:12.988 07:31:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:25:12.988 07:31:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.988 07:31:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.988 07:31:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:12.988 07:31:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.988 07:31:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.988 07:31:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.988 07:31:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.988 07:31:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.988 07:31:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.988 07:31:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.988 07:31:14 -- paths/export.sh@5 -- # export PATH 00:25:12.988 07:31:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.988 07:31:14 -- nvmf/common.sh@46 -- # : 0 00:25:12.988 07:31:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:12.988 07:31:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:12.988 07:31:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:12.988 07:31:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.988 07:31:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.988 07:31:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:12.988 07:31:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:12.988 07:31:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:12.988 07:31:14 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.988 07:31:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.988 07:31:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.988 07:31:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.988 07:31:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.988 07:31:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.989 07:31:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.989 07:31:14 -- paths/export.sh@5 -- # export PATH 00:25:12.989 07:31:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.989 07:31:14 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:12.989 07:31:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:12.989 07:31:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.989 07:31:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:12.989 07:31:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:12.989 07:31:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:12.989 07:31:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.989 07:31:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:12.989 07:31:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.247 07:31:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:13.247 07:31:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:13.247 07:31:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:13.247 07:31:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:13.247 07:31:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:13.247 07:31:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:13.247 07:31:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.247 07:31:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.247 07:31:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.247 07:31:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:13.247 07:31:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.247 07:31:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.247 07:31:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.247 07:31:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.247 07:31:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.247 07:31:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.248 07:31:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.248 07:31:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.248 07:31:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:13.248 07:31:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:13.248 Cannot find device "nvmf_tgt_br" 00:25:13.248 07:31:14 -- nvmf/common.sh@154 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.248 Cannot find device "nvmf_tgt_br2" 00:25:13.248 07:31:14 -- nvmf/common.sh@155 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:13.248 07:31:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:13.248 Cannot find device "nvmf_tgt_br" 00:25:13.248 07:31:14 -- nvmf/common.sh@157 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:13.248 Cannot find device "nvmf_tgt_br2" 00:25:13.248 07:31:14 -- nvmf/common.sh@158 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:13.248 07:31:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:13.248 07:31:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.248 07:31:14 -- nvmf/common.sh@161 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:13.248 07:31:14 -- nvmf/common.sh@162 -- # true 00:25:13.248 07:31:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.248 07:31:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.248 07:31:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.248 07:31:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.248 07:31:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.248 07:31:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.248 07:31:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.248 07:31:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.248 07:31:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.248 07:31:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:13.248 07:31:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:13.248 07:31:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:13.248 07:31:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:13.248 07:31:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.248 07:31:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.248 07:31:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.248 07:31:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:13.248 07:31:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:13.248 07:31:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.248 07:31:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.507 07:31:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.507 07:31:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.507 07:31:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.507 07:31:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:13.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:13.507 00:25:13.507 --- 10.0.0.2 ping statistics --- 00:25:13.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.507 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:13.507 07:31:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:13.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:13.507 00:25:13.507 --- 10.0.0.3 ping statistics --- 00:25:13.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.507 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:13.507 07:31:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:13.507 00:25:13.507 --- 10.0.0.1 ping statistics --- 00:25:13.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.507 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:13.507 07:31:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.507 07:31:15 -- nvmf/common.sh@421 -- # return 0 00:25:13.507 07:31:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:13.507 07:31:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.507 07:31:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:13.507 07:31:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:13.507 07:31:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.507 07:31:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:13.507 07:31:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:13.507 07:31:15 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:13.507 07:31:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:13.507 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:13.507 07:31:15 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:13.507 07:31:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:13.507 07:31:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:13.507 07:31:15 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:13.507 07:31:15 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:13.507 07:31:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:13.507 07:31:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:13.507 07:31:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:13.507 07:31:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:13.507 07:31:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:13.507 07:31:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:13.507 07:31:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:13.507 07:31:15 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:13.507 07:31:15 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:13.507 07:31:15 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:13.507 07:31:15 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:13.507 07:31:15 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.507 07:31:15 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:13.766 07:31:15 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:13.766 07:31:15 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:13.766 07:31:15 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:13.766 07:31:15 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:13.766 07:31:15 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:13.766 07:31:15 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:13.766 07:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:13.766 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:13.766 07:31:15 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:13.766 07:31:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:13.766 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:14.025 07:31:15 -- target/identify_passthru.sh@31 -- # nvmfpid=101332 00:25:14.025 07:31:15 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:14.025 07:31:15 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.025 07:31:15 -- target/identify_passthru.sh@35 -- # waitforlisten 101332 00:25:14.025 07:31:15 -- common/autotest_common.sh@819 -- # '[' -z 101332 ']' 00:25:14.025 07:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.025 07:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.025 07:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.025 07:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.025 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:14.025 [2024-11-04 07:31:15.662818] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:14.025 [2024-11-04 07:31:15.663130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.025 [2024-11-04 07:31:15.802197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.284 [2024-11-04 07:31:15.884014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.284 [2024-11-04 07:31:15.884162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.284 [2024-11-04 07:31:15.884175] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.284 [2024-11-04 07:31:15.884183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
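Because the identify-passthru target above is started with --wait-for-rpc, framework initialization is held back until the passthru option has been configured over RPC. The rpc_cmd calls visible in the next lines correspond roughly to the rpc.py sequence below; the method names and arguments are taken from the log itself, while the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions (rpc_cmd in the log is the harness's wrapper for issuing these calls):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420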
00:25:14.284 [2024-11-04 07:31:15.884243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.284 [2024-11-04 07:31:15.884532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.284 [2024-11-04 07:31:15.884667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.284 [2024-11-04 07:31:15.884676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.284 07:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:14.284 07:31:15 -- common/autotest_common.sh@852 -- # return 0 00:25:14.284 07:31:15 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:14.284 07:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.284 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:14.284 07:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.284 07:31:15 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:14.284 07:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.284 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:25:14.284 [2024-11-04 07:31:16.029659] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:14.284 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.284 07:31:16 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.284 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.284 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.284 [2024-11-04 07:31:16.043741] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.285 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.285 07:31:16 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:14.285 07:31:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:14.285 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.285 07:31:16 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:14.285 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.285 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.544 Nvme0n1 00:25:14.544 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.544 07:31:16 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:14.544 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.544 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.544 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.544 07:31:16 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:14.544 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.544 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.544 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.544 07:31:16 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.544 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.544 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.544 [2024-11-04 07:31:16.180464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.544 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:14.544 07:31:16 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:14.544 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.544 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.544 [2024-11-04 07:31:16.188240] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:14.544 [ 00:25:14.544 { 00:25:14.544 "allow_any_host": true, 00:25:14.544 "hosts": [], 00:25:14.544 "listen_addresses": [], 00:25:14.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:14.544 "subtype": "Discovery" 00:25:14.544 }, 00:25:14.544 { 00:25:14.544 "allow_any_host": true, 00:25:14.544 "hosts": [], 00:25:14.544 "listen_addresses": [ 00:25:14.544 { 00:25:14.544 "adrfam": "IPv4", 00:25:14.544 "traddr": "10.0.0.2", 00:25:14.544 "transport": "TCP", 00:25:14.544 "trsvcid": "4420", 00:25:14.544 "trtype": "TCP" 00:25:14.544 } 00:25:14.544 ], 00:25:14.544 "max_cntlid": 65519, 00:25:14.544 "max_namespaces": 1, 00:25:14.544 "min_cntlid": 1, 00:25:14.544 "model_number": "SPDK bdev Controller", 00:25:14.544 "namespaces": [ 00:25:14.544 { 00:25:14.544 "bdev_name": "Nvme0n1", 00:25:14.544 "name": "Nvme0n1", 00:25:14.544 "nguid": "7A1B677996F548A7A07A3E7FE8539FC2", 00:25:14.544 "nsid": 1, 00:25:14.544 "uuid": "7a1b6779-96f5-48a7-a07a-3e7fe8539fc2" 00:25:14.544 } 00:25:14.544 ], 00:25:14.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.544 "serial_number": "SPDK00000000000001", 00:25:14.544 "subtype": "NVMe" 00:25:14.544 } 00:25:14.544 ] 00:25:14.544 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.544 07:31:16 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:14.544 07:31:16 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:14.544 07:31:16 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:14.908 07:31:16 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:14.908 07:31:16 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:14.908 07:31:16 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.908 07:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.908 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:25:14.908 07:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.908 07:31:16 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:14.908 07:31:16 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:14.908 07:31:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:14.908 07:31:16 -- nvmf/common.sh@116 -- # sync 00:25:15.167 07:31:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:15.167 07:31:16 -- nvmf/common.sh@119 -- # set +e 00:25:15.167 07:31:16 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:15.167 07:31:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:15.167 rmmod nvme_tcp 00:25:15.167 rmmod nvme_fabrics 00:25:15.167 rmmod nvme_keyring 00:25:15.167 07:31:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:15.167 07:31:16 -- nvmf/common.sh@123 -- # set -e 00:25:15.167 07:31:16 -- nvmf/common.sh@124 -- # return 0 00:25:15.167 07:31:16 -- nvmf/common.sh@477 -- # '[' -n 101332 ']' 00:25:15.167 07:31:16 -- nvmf/common.sh@478 -- # killprocess 101332 00:25:15.167 07:31:16 -- common/autotest_common.sh@926 -- # '[' -z 101332 ']' 00:25:15.167 07:31:16 -- common/autotest_common.sh@930 -- # kill -0 101332 00:25:15.167 07:31:16 -- common/autotest_common.sh@931 -- # uname 00:25:15.167 07:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.167 07:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101332 00:25:15.167 killing process with pid 101332 00:25:15.167 07:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.167 07:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.167 07:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101332' 00:25:15.167 07:31:16 -- common/autotest_common.sh@945 -- # kill 101332 00:25:15.167 [2024-11-04 07:31:16.780746] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:15.167 07:31:16 -- common/autotest_common.sh@950 -- # wait 101332 00:25:15.167 07:31:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:15.167 07:31:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:15.167 07:31:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:15.167 07:31:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.167 07:31:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:15.167 07:31:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.167 07:31:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:15.167 07:31:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.167 07:31:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:15.426 00:25:15.426 real 0m2.295s 00:25:15.426 user 0m4.647s 00:25:15.426 sys 0m0.757s 00:25:15.426 07:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.426 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:25:15.426 ************************************ 00:25:15.426 END TEST nvmf_identify_passthru 00:25:15.426 ************************************ 00:25:15.426 07:31:17 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:15.426 07:31:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:15.426 07:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:15.426 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:25:15.426 ************************************ 00:25:15.426 START TEST nvmf_dif 00:25:15.426 ************************************ 00:25:15.426 07:31:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:15.426 * Looking for test storage... 
00:25:15.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:15.426 07:31:17 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:15.426 07:31:17 -- nvmf/common.sh@7 -- # uname -s 00:25:15.426 07:31:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.426 07:31:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.426 07:31:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.426 07:31:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.426 07:31:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.426 07:31:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.426 07:31:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.426 07:31:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.426 07:31:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.426 07:31:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.427 07:31:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:25:15.427 07:31:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:25:15.427 07:31:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.427 07:31:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.427 07:31:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:15.427 07:31:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.427 07:31:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.427 07:31:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.427 07:31:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.427 07:31:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.427 07:31:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.427 07:31:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.427 07:31:17 -- paths/export.sh@5 -- # export PATH 00:25:15.427 07:31:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.427 07:31:17 -- nvmf/common.sh@46 -- # : 0 00:25:15.427 07:31:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.427 07:31:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.427 07:31:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.427 07:31:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.427 07:31:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.427 07:31:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.427 07:31:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.427 07:31:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.427 07:31:17 -- target/dif.sh@15 -- # NULL_META=16 00:25:15.427 07:31:17 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:15.427 07:31:17 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:15.427 07:31:17 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:15.427 07:31:17 -- target/dif.sh@135 -- # nvmftestinit 00:25:15.427 07:31:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:15.427 07:31:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.427 07:31:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:15.427 07:31:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:15.427 07:31:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:15.427 07:31:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.427 07:31:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:15.427 07:31:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.427 07:31:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:15.427 07:31:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:15.427 07:31:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:15.427 07:31:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:15.427 07:31:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:15.427 07:31:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:15.427 07:31:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.427 07:31:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.427 07:31:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:15.427 07:31:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:15.427 07:31:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:15.427 07:31:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:15.427 07:31:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:15.427 07:31:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.427 07:31:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:15.427 07:31:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:15.427 07:31:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:15.427 07:31:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:15.427 07:31:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:15.427 07:31:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:15.427 Cannot find device "nvmf_tgt_br" 
00:25:15.427 07:31:17 -- nvmf/common.sh@154 -- # true 00:25:15.427 07:31:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:15.427 Cannot find device "nvmf_tgt_br2" 00:25:15.427 07:31:17 -- nvmf/common.sh@155 -- # true 00:25:15.427 07:31:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:15.427 07:31:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:15.427 Cannot find device "nvmf_tgt_br" 00:25:15.427 07:31:17 -- nvmf/common.sh@157 -- # true 00:25:15.427 07:31:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:15.427 Cannot find device "nvmf_tgt_br2" 00:25:15.427 07:31:17 -- nvmf/common.sh@158 -- # true 00:25:15.427 07:31:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:15.427 07:31:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:15.686 07:31:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:15.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.686 07:31:17 -- nvmf/common.sh@161 -- # true 00:25:15.686 07:31:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:15.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.686 07:31:17 -- nvmf/common.sh@162 -- # true 00:25:15.686 07:31:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:15.686 07:31:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:15.686 07:31:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:15.686 07:31:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:15.686 07:31:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:15.686 07:31:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:15.686 07:31:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:15.686 07:31:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:15.686 07:31:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:15.686 07:31:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:15.686 07:31:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:15.686 07:31:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:15.686 07:31:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:15.686 07:31:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:15.686 07:31:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:15.686 07:31:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:15.686 07:31:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:15.686 07:31:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:15.686 07:31:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:15.686 07:31:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:15.686 07:31:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:15.686 07:31:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:15.686 07:31:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:15.686 07:31:17 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:15.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:25:15.686 00:25:15.686 --- 10.0.0.2 ping statistics --- 00:25:15.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.686 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:15.686 07:31:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:15.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:15.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:25:15.686 00:25:15.686 --- 10.0.0.3 ping statistics --- 00:25:15.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.686 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:15.686 07:31:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:15.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:15.686 00:25:15.686 --- 10.0.0.1 ping statistics --- 00:25:15.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.686 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:15.686 07:31:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.686 07:31:17 -- nvmf/common.sh@421 -- # return 0 00:25:15.686 07:31:17 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:15.686 07:31:17 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:15.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:16.204 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:16.204 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:16.204 07:31:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.204 07:31:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:16.204 07:31:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:16.204 07:31:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.204 07:31:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:16.204 07:31:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:16.204 07:31:17 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:16.204 07:31:17 -- target/dif.sh@137 -- # nvmfappstart 00:25:16.204 07:31:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:16.204 07:31:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:16.204 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.204 07:31:17 -- nvmf/common.sh@469 -- # nvmfpid=101666 00:25:16.204 07:31:17 -- nvmf/common.sh@470 -- # waitforlisten 101666 00:25:16.204 07:31:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:16.204 07:31:17 -- common/autotest_common.sh@819 -- # '[' -z 101666 ']' 00:25:16.204 07:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.204 07:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:16.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.204 07:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
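At this point the DIF-test target has been launched inside the nvmf_tgt_ns_spdk namespace and the harness is waiting on its RPC socket. The configuration built in the following lines (a TCP transport with DIF insert/strip, a DIF-type-1 null bdev, and a subsystem exposing it on 10.0.0.2:4420) can be reproduced with plain rpc.py calls; all method names and arguments below are taken from the log, while the scripts/rpc.py path and the default RPC socket are assumptions:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420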
00:25:16.204 07:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:16.204 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.204 [2024-11-04 07:31:17.921052] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:16.204 [2024-11-04 07:31:17.921146] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.463 [2024-11-04 07:31:18.063569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.463 [2024-11-04 07:31:18.137906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:16.463 [2024-11-04 07:31:18.138090] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.463 [2024-11-04 07:31:18.138106] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.463 [2024-11-04 07:31:18.138119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.463 [2024-11-04 07:31:18.138150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.400 07:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.400 07:31:18 -- common/autotest_common.sh@852 -- # return 0 00:25:17.400 07:31:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:17.400 07:31:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:17.400 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 07:31:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.400 07:31:18 -- target/dif.sh@139 -- # create_transport 00:25:17.400 07:31:18 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:17.400 07:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.400 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 [2024-11-04 07:31:19.000401] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.400 07:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.400 07:31:19 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:17.400 07:31:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:17.400 07:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:17.400 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 ************************************ 00:25:17.400 START TEST fio_dif_1_default 00:25:17.400 ************************************ 00:25:17.400 07:31:19 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:17.400 07:31:19 -- target/dif.sh@86 -- # create_subsystems 0 00:25:17.400 07:31:19 -- target/dif.sh@28 -- # local sub 00:25:17.400 07:31:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:17.400 07:31:19 -- target/dif.sh@31 -- # create_subsystem 0 00:25:17.400 07:31:19 -- target/dif.sh@18 -- # local sub_id=0 00:25:17.400 07:31:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:17.400 07:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.400 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 bdev_null0 00:25:17.400 07:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.400 07:31:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:17.400 07:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.400 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 07:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.400 07:31:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:17.400 07:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.400 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 07:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.400 07:31:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.400 07:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.400 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 [2024-11-04 07:31:19.044488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.400 07:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.400 07:31:19 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:17.400 07:31:19 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:17.400 07:31:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:17.400 07:31:19 -- nvmf/common.sh@520 -- # config=() 00:25:17.400 07:31:19 -- nvmf/common.sh@520 -- # local subsystem config 00:25:17.400 07:31:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:17.400 07:31:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:17.400 { 00:25:17.400 "params": { 00:25:17.400 "name": "Nvme$subsystem", 00:25:17.400 "trtype": "$TEST_TRANSPORT", 00:25:17.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.400 "adrfam": "ipv4", 00:25:17.400 "trsvcid": "$NVMF_PORT", 00:25:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.400 "hdgst": ${hdgst:-false}, 00:25:17.400 "ddgst": ${ddgst:-false} 00:25:17.400 }, 00:25:17.400 "method": "bdev_nvme_attach_controller" 00:25:17.400 } 00:25:17.400 EOF 00:25:17.400 )") 00:25:17.400 07:31:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:17.400 07:31:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:17.400 07:31:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:17.400 07:31:19 -- target/dif.sh@54 -- # local file 00:25:17.400 07:31:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:17.400 07:31:19 -- target/dif.sh@56 -- # cat 00:25:17.400 07:31:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.400 07:31:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:17.400 07:31:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:17.400 07:31:19 -- common/autotest_common.sh@1320 -- # shift 00:25:17.400 07:31:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:17.400 07:31:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.400 07:31:19 -- nvmf/common.sh@542 -- # cat 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:17.400 07:31:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:17.400 
07:31:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:17.400 07:31:19 -- nvmf/common.sh@544 -- # jq . 00:25:17.400 07:31:19 -- nvmf/common.sh@545 -- # IFS=, 00:25:17.400 07:31:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:17.400 "params": { 00:25:17.400 "name": "Nvme0", 00:25:17.400 "trtype": "tcp", 00:25:17.400 "traddr": "10.0.0.2", 00:25:17.400 "adrfam": "ipv4", 00:25:17.400 "trsvcid": "4420", 00:25:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:17.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:17.400 "hdgst": false, 00:25:17.400 "ddgst": false 00:25:17.400 }, 00:25:17.400 "method": "bdev_nvme_attach_controller" 00:25:17.400 }' 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:17.400 07:31:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:17.400 07:31:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:17.400 07:31:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:17.400 07:31:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:17.400 07:31:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:17.400 07:31:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:17.659 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:17.660 fio-3.35 00:25:17.660 Starting 1 thread 00:25:17.918 [2024-11-04 07:31:19.676568] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
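Editor's note: for the single-subsystem fio_dif_1_default run, gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown above and hands it to fio over /dev/fd/62, while gen_fio_conf supplies the job file over /dev/fd/61 and LD_PRELOAD points fio at the spdk_bdev ioengine plugin. A rough file-based equivalent; the "params" block is verbatim from the log, but the outer subsystems/bdev wrapper, the bdev name Nvme0n1, and the exact job options (inferred from the fio banner: 4 KiB random reads, iodepth 4, ~10 s run) are assumptions:

    # bdev.json - the attach_controller entry printed in the log, wrapped as the fio bdev plugin expects (wrapper assumed)
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ]
    }

    # job.fio - approximating what gen_fio_conf feeds in over /dev/fd/61
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1    ; nvme bdev created from controller "Nvme0" - naming assumed

    # Invocation, mirroring the traced command but with files instead of /dev/fd pipes.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio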
00:25:17.918 [2024-11-04 07:31:19.676640] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:30.134 00:25:30.134 filename0: (groupid=0, jobs=1): err= 0: pid=101750: Mon Nov 4 07:31:29 2024 00:25:30.134 read: IOPS=5096, BW=19.9MiB/s (20.9MB/s)(200MiB/10032msec) 00:25:30.134 slat (nsec): min=5743, max=60216, avg=7194.14, stdev=2793.05 00:25:30.134 clat (usec): min=344, max=42484, avg=763.23, stdev=3832.13 00:25:30.134 lat (usec): min=350, max=42494, avg=770.42, stdev=3832.20 00:25:30.134 clat percentiles (usec): 00:25:30.134 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 371], 00:25:30.134 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 400], 00:25:30.134 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 441], 95.00th=[ 465], 00:25:30.134 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:25:30.134 | 99.99th=[42206] 00:25:30.134 bw ( KiB/s): min= 4768, max=31872, per=100.00%, avg=20451.20, stdev=7061.91, samples=20 00:25:30.134 iops : min= 1192, max= 7968, avg=5112.80, stdev=1765.48, samples=20 00:25:30.134 lat (usec) : 500=97.73%, 750=1.36%, 1000=0.01% 00:25:30.134 lat (msec) : 10=0.01%, 50=0.90% 00:25:30.134 cpu : usr=87.82%, sys=10.05%, ctx=21, majf=0, minf=0 00:25:30.134 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.134 issued rwts: total=51132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.134 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:30.134 00:25:30.134 Run status group 0 (all jobs): 00:25:30.134 READ: bw=19.9MiB/s (20.9MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=200MiB (209MB), run=10032-10032msec 00:25:30.134 07:31:30 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:30.134 07:31:30 -- target/dif.sh@43 -- # local sub 00:25:30.134 07:31:30 -- target/dif.sh@45 -- # for sub in "$@" 00:25:30.134 07:31:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:30.134 07:31:30 -- target/dif.sh@36 -- # local sub_id=0 00:25:30.134 07:31:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:30.134 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.134 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.134 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.134 07:31:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:30.134 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 00:25:30.135 real 0m11.025s 00:25:30.135 user 0m9.446s 00:25:30.135 sys 0m1.277s 00:25:30.135 07:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.135 ************************************ 00:25:30.135 END TEST fio_dif_1_default 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 ************************************ 00:25:30.135 07:31:30 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:30.135 07:31:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:30.135 07:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 ************************************ 00:25:30.135 START 
TEST fio_dif_1_multi_subsystems 00:25:30.135 ************************************ 00:25:30.135 07:31:30 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:30.135 07:31:30 -- target/dif.sh@92 -- # local files=1 00:25:30.135 07:31:30 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:30.135 07:31:30 -- target/dif.sh@28 -- # local sub 00:25:30.135 07:31:30 -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.135 07:31:30 -- target/dif.sh@31 -- # create_subsystem 0 00:25:30.135 07:31:30 -- target/dif.sh@18 -- # local sub_id=0 00:25:30.135 07:31:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 bdev_null0 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 [2024-11-04 07:31:30.136019] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.135 07:31:30 -- target/dif.sh@31 -- # create_subsystem 1 00:25:30.135 07:31:30 -- target/dif.sh@18 -- # local sub_id=1 00:25:30.135 07:31:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 bdev_null1 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.135 07:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.135 07:31:30 -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.135 07:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.135 07:31:30 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:30.135 07:31:30 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:30.135 07:31:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:30.135 07:31:30 -- nvmf/common.sh@520 -- # config=() 00:25:30.135 07:31:30 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.135 07:31:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.135 07:31:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.135 07:31:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.135 { 00:25:30.135 "params": { 00:25:30.135 "name": "Nvme$subsystem", 00:25:30.135 "trtype": "$TEST_TRANSPORT", 00:25:30.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.135 "adrfam": "ipv4", 00:25:30.135 "trsvcid": "$NVMF_PORT", 00:25:30.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.135 "hdgst": ${hdgst:-false}, 00:25:30.135 "ddgst": ${ddgst:-false} 00:25:30.135 }, 00:25:30.135 "method": "bdev_nvme_attach_controller" 00:25:30.135 } 00:25:30.135 EOF 00:25:30.135 )") 00:25:30.135 07:31:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.135 07:31:30 -- target/dif.sh@82 -- # gen_fio_conf 00:25:30.135 07:31:30 -- target/dif.sh@54 -- # local file 00:25:30.135 07:31:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:30.135 07:31:30 -- target/dif.sh@56 -- # cat 00:25:30.135 07:31:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:30.135 07:31:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:30.135 07:31:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.135 07:31:30 -- common/autotest_common.sh@1320 -- # shift 00:25:30.135 07:31:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:30.135 07:31:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.135 07:31:30 -- nvmf/common.sh@542 -- # cat 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:30.135 07:31:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:30.135 07:31:30 -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.135 07:31:30 -- target/dif.sh@73 -- # cat 00:25:30.135 07:31:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.135 07:31:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.135 { 00:25:30.135 "params": { 00:25:30.135 "name": "Nvme$subsystem", 00:25:30.135 "trtype": "$TEST_TRANSPORT", 00:25:30.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.135 "adrfam": "ipv4", 00:25:30.135 "trsvcid": "$NVMF_PORT", 00:25:30.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.135 "hdgst": ${hdgst:-false}, 00:25:30.135 "ddgst": ${ddgst:-false} 00:25:30.135 }, 00:25:30.135 "method": "bdev_nvme_attach_controller" 00:25:30.135 } 00:25:30.135 EOF 00:25:30.135 )") 00:25:30.135 07:31:30 -- target/dif.sh@72 -- # (( file++ )) 00:25:30.135 07:31:30 -- 
target/dif.sh@72 -- # (( file <= files )) 00:25:30.135 07:31:30 -- nvmf/common.sh@542 -- # cat 00:25:30.135 07:31:30 -- nvmf/common.sh@544 -- # jq . 00:25:30.135 07:31:30 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.135 07:31:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.135 "params": { 00:25:30.135 "name": "Nvme0", 00:25:30.135 "trtype": "tcp", 00:25:30.135 "traddr": "10.0.0.2", 00:25:30.135 "adrfam": "ipv4", 00:25:30.135 "trsvcid": "4420", 00:25:30.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:30.135 "hdgst": false, 00:25:30.135 "ddgst": false 00:25:30.135 }, 00:25:30.135 "method": "bdev_nvme_attach_controller" 00:25:30.135 },{ 00:25:30.135 "params": { 00:25:30.135 "name": "Nvme1", 00:25:30.135 "trtype": "tcp", 00:25:30.135 "traddr": "10.0.0.2", 00:25:30.135 "adrfam": "ipv4", 00:25:30.135 "trsvcid": "4420", 00:25:30.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.135 "hdgst": false, 00:25:30.135 "ddgst": false 00:25:30.135 }, 00:25:30.135 "method": "bdev_nvme_attach_controller" 00:25:30.135 }' 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.135 07:31:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.135 07:31:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.135 07:31:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.135 07:31:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.135 07:31:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:30.135 07:31:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.135 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:30.136 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:30.136 fio-3.35 00:25:30.136 Starting 2 threads 00:25:30.136 [2024-11-04 07:31:30.900957] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
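Editor's note: the fio_dif_1_multi_subsystems case that starts below wires up two targets with the same four RPCs traced above (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), once for sub 0 and once for sub 1, before generating a two-controller JSON config for fio. A condensed sketch of that loop; rpc.py is used here in place of the harness's rpc_cmd wrapper, which forwards to the same RPC socket, and the rpc.py path is an assumption:

    #!/usr/bin/env bash
    # Create two DIF-type-1 null bdevs and expose each through its own NVMe-oF/TCP subsystem,
    # mirroring the rpc_cmd sequence in the log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for sub in 0 1; do
        $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
             --serial-number "53313233-$sub" --allow-any-host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
             -t tcp -a 10.0.0.2 -s 4420
    done

Both subsystems listen on the same 10.0.0.2:4420 endpoint; fio then runs one job per attached controller, which is why the run below reports two filenames and two threads.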
00:25:30.136 [2024-11-04 07:31:30.901020] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:40.132 00:25:40.132 filename0: (groupid=0, jobs=1): err= 0: pid=101911: Mon Nov 4 07:31:41 2024 00:25:40.132 read: IOPS=989, BW=3956KiB/s (4051kB/s)(38.8MiB/10030msec) 00:25:40.132 slat (nsec): min=5809, max=57332, avg=7386.27, stdev=3025.46 00:25:40.132 clat (usec): min=370, max=41795, avg=4022.17, stdev=11505.54 00:25:40.132 lat (usec): min=377, max=41807, avg=4029.56, stdev=11505.60 00:25:40.132 clat percentiles (usec): 00:25:40.132 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 396], 00:25:40.132 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:25:40.132 | 70.00th=[ 437], 80.00th=[ 482], 90.00th=[ 717], 95.00th=[41157], 00:25:40.132 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:40.132 | 99.99th=[41681] 00:25:40.132 bw ( KiB/s): min= 768, max= 6976, per=50.12%, avg=3966.00, stdev=1441.77, samples=20 00:25:40.132 iops : min= 192, max= 1744, avg=991.50, stdev=360.44, samples=20 00:25:40.132 lat (usec) : 500=80.51%, 750=10.13%, 1000=0.20% 00:25:40.132 lat (msec) : 2=0.32%, 50=8.83% 00:25:40.132 cpu : usr=95.02%, sys=4.32%, ctx=10, majf=0, minf=9 00:25:40.132 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.132 issued rwts: total=9920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.132 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:40.132 filename1: (groupid=0, jobs=1): err= 0: pid=101912: Mon Nov 4 07:31:41 2024 00:25:40.132 read: IOPS=990, BW=3960KiB/s (4055kB/s)(38.8MiB/10040msec) 00:25:40.132 slat (nsec): min=5793, max=34281, avg=7379.61, stdev=2987.92 00:25:40.132 clat (usec): min=370, max=42509, avg=4018.23, stdev=11499.33 00:25:40.132 lat (usec): min=376, max=42521, avg=4025.61, stdev=11499.34 00:25:40.132 clat percentiles (usec): 00:25:40.132 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 396], 00:25:40.132 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:25:40.132 | 70.00th=[ 441], 80.00th=[ 510], 90.00th=[ 717], 95.00th=[41157], 00:25:40.132 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:40.132 | 99.99th=[42730] 00:25:40.132 bw ( KiB/s): min= 864, max= 4992, per=50.21%, avg=3973.90, stdev=920.76, samples=20 00:25:40.132 iops : min= 216, max= 1248, avg=993.45, stdev=230.18, samples=20 00:25:40.132 lat (usec) : 500=79.70%, 750=10.80%, 1000=0.32% 00:25:40.132 lat (msec) : 2=0.36%, 50=8.81% 00:25:40.132 cpu : usr=95.40%, sys=3.93%, ctx=21, majf=0, minf=0 00:25:40.132 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.132 issued rwts: total=9940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.132 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:40.132 00:25:40.132 Run status group 0 (all jobs): 00:25:40.132 READ: bw=7912KiB/s (8102kB/s), 3956KiB/s-3960KiB/s (4051kB/s-4055kB/s), io=77.6MiB (81.3MB), run=10030-10040msec 00:25:40.132 07:31:41 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:40.132 07:31:41 -- target/dif.sh@43 -- # local sub 00:25:40.132 07:31:41 -- target/dif.sh@45 -- # for sub in 
"$@" 00:25:40.132 07:31:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:40.132 07:31:41 -- target/dif.sh@36 -- # local sub_id=0 00:25:40.132 07:31:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@45 -- # for sub in "$@" 00:25:40.132 07:31:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:40.132 07:31:41 -- target/dif.sh@36 -- # local sub_id=1 00:25:40.132 07:31:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 00:25:40.132 real 0m11.199s 00:25:40.132 user 0m19.883s 00:25:40.132 sys 0m1.138s 00:25:40.132 07:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.132 ************************************ 00:25:40.132 END TEST fio_dif_1_multi_subsystems 00:25:40.132 ************************************ 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:40.132 07:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.132 07:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 ************************************ 00:25:40.132 START TEST fio_dif_rand_params 00:25:40.132 ************************************ 00:25:40.132 07:31:41 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:40.132 07:31:41 -- target/dif.sh@100 -- # local NULL_DIF 00:25:40.132 07:31:41 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:40.132 07:31:41 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:40.132 07:31:41 -- target/dif.sh@103 -- # bs=128k 00:25:40.132 07:31:41 -- target/dif.sh@103 -- # numjobs=3 00:25:40.132 07:31:41 -- target/dif.sh@103 -- # iodepth=3 00:25:40.132 07:31:41 -- target/dif.sh@103 -- # runtime=5 00:25:40.132 07:31:41 -- target/dif.sh@105 -- # create_subsystems 0 00:25:40.132 07:31:41 -- target/dif.sh@28 -- # local sub 00:25:40.132 07:31:41 -- target/dif.sh@30 -- # for sub in "$@" 00:25:40.132 07:31:41 -- target/dif.sh@31 -- # create_subsystem 0 00:25:40.132 07:31:41 -- target/dif.sh@18 -- # local sub_id=0 00:25:40.132 07:31:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 bdev_null0 00:25:40.132 07:31:41 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.132 07:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.132 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:25:40.132 [2024-11-04 07:31:41.391636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.132 07:31:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.132 07:31:41 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:40.132 07:31:41 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:40.132 07:31:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:40.133 07:31:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.133 07:31:41 -- nvmf/common.sh@520 -- # config=() 00:25:40.133 07:31:41 -- nvmf/common.sh@520 -- # local subsystem config 00:25:40.133 07:31:41 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.133 07:31:41 -- target/dif.sh@82 -- # gen_fio_conf 00:25:40.133 07:31:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.133 07:31:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:40.133 07:31:41 -- target/dif.sh@54 -- # local file 00:25:40.133 07:31:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.133 { 00:25:40.133 "params": { 00:25:40.133 "name": "Nvme$subsystem", 00:25:40.133 "trtype": "$TEST_TRANSPORT", 00:25:40.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.133 "adrfam": "ipv4", 00:25:40.133 "trsvcid": "$NVMF_PORT", 00:25:40.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.133 "hdgst": ${hdgst:-false}, 00:25:40.133 "ddgst": ${ddgst:-false} 00:25:40.133 }, 00:25:40.133 "method": "bdev_nvme_attach_controller" 00:25:40.133 } 00:25:40.133 EOF 00:25:40.133 )") 00:25:40.133 07:31:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.133 07:31:41 -- target/dif.sh@56 -- # cat 00:25:40.133 07:31:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:40.133 07:31:41 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.133 07:31:41 -- common/autotest_common.sh@1320 -- # shift 00:25:40.133 07:31:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:40.133 07:31:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.133 07:31:41 -- nvmf/common.sh@542 -- # cat 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.133 07:31:41 
-- common/autotest_common.sh@1324 -- # grep libasan 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:40.133 07:31:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:40.133 07:31:41 -- target/dif.sh@72 -- # (( file <= files )) 00:25:40.133 07:31:41 -- nvmf/common.sh@544 -- # jq . 00:25:40.133 07:31:41 -- nvmf/common.sh@545 -- # IFS=, 00:25:40.133 07:31:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:40.133 "params": { 00:25:40.133 "name": "Nvme0", 00:25:40.133 "trtype": "tcp", 00:25:40.133 "traddr": "10.0.0.2", 00:25:40.133 "adrfam": "ipv4", 00:25:40.133 "trsvcid": "4420", 00:25:40.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:40.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:40.133 "hdgst": false, 00:25:40.133 "ddgst": false 00:25:40.133 }, 00:25:40.133 "method": "bdev_nvme_attach_controller" 00:25:40.133 }' 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:40.133 07:31:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:40.133 07:31:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:40.133 07:31:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:40.133 07:31:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:40.133 07:31:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:40.133 07:31:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.133 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:40.133 ... 00:25:40.133 fio-3.35 00:25:40.133 Starting 3 threads 00:25:40.391 [2024-11-04 07:31:42.026582] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
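Editor's note: the fio_dif_rand_params pass starting here reuses the same plumbing with different parameters: a DIF-type-3 null bdev, 128 KiB random reads, three jobs at queue depth 3 for roughly 5 seconds (per the fio banner and the 5003-5005 msec run times below). A job-file sketch matching those values; gen_fio_conf's actual output is not shown in the log, so the option names and the Nvme0n1 filename are assumptions consistent with the banner:

    ; rand-params job approximating the run below
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1    ; bdev name assumed, as in the single-subsystem sketch above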
00:25:40.391 [2024-11-04 07:31:42.026676] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:45.656 00:25:45.656 filename0: (groupid=0, jobs=1): err= 0: pid=102069: Mon Nov 4 07:31:47 2024 00:25:45.656 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(165MiB/5003msec) 00:25:45.656 slat (nsec): min=5966, max=58606, avg=12673.34, stdev=6415.44 00:25:45.656 clat (usec): min=3904, max=52871, avg=11357.77, stdev=9275.82 00:25:45.656 lat (usec): min=3912, max=52898, avg=11370.44, stdev=9276.17 00:25:45.656 clat percentiles (usec): 00:25:45.656 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 6980], 00:25:45.656 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:25:45.656 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[46924], 00:25:45.656 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:25:45.656 | 99.99th=[52691] 00:25:45.656 bw ( KiB/s): min=25600, max=43008, per=30.60%, avg=33528.22, stdev=7204.10, samples=9 00:25:45.656 iops : min= 200, max= 336, avg=261.89, stdev=56.27, samples=9 00:25:45.656 lat (msec) : 4=0.53%, 10=50.87%, 20=43.37%, 50=1.97%, 100=3.26% 00:25:45.656 cpu : usr=94.08%, sys=4.44%, ctx=6, majf=0, minf=0 00:25:45.656 IO depths : 1=4.9%, 2=95.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 issued rwts: total=1319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:45.656 filename0: (groupid=0, jobs=1): err= 0: pid=102070: Mon Nov 4 07:31:47 2024 00:25:45.656 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(191MiB/5003msec) 00:25:45.656 slat (nsec): min=5775, max=47254, avg=10797.67, stdev=6140.68 00:25:45.656 clat (usec): min=2855, max=53008, avg=9798.70, stdev=4600.56 00:25:45.656 lat (usec): min=2865, max=53046, avg=9809.50, stdev=4601.59 00:25:45.656 clat percentiles (usec): 00:25:45.656 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 7242], 00:25:45.656 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[10552], 60.00th=[11863], 00:25:45.656 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:25:45.656 | 99.00th=[14877], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:25:45.656 | 99.99th=[53216] 00:25:45.656 bw ( KiB/s): min=32256, max=49152, per=36.27%, avg=39736.89, stdev=5845.80, samples=9 00:25:45.656 iops : min= 252, max= 384, avg=310.44, stdev=45.67, samples=9 00:25:45.656 lat (msec) : 4=14.99%, 10=33.70%, 20=50.72%, 50=0.20%, 100=0.39% 00:25:45.656 cpu : usr=94.34%, sys=4.00%, ctx=68, majf=0, minf=0 00:25:45.656 IO depths : 1=25.4%, 2=74.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:45.656 filename0: (groupid=0, jobs=1): err= 0: pid=102071: Mon Nov 4 07:31:47 2024 00:25:45.656 read: IOPS=287, BW=35.9MiB/s (37.6MB/s)(180MiB/5005msec) 00:25:45.656 slat (usec): min=6, max=100, avg=15.23, stdev= 7.01 00:25:45.656 clat (usec): min=3255, max=51879, avg=10424.89, stdev=9421.49 00:25:45.656 lat (usec): min=3265, max=51894, avg=10440.12, stdev=9421.28 00:25:45.656 clat percentiles (usec): 
00:25:45.656 | 1.00th=[ 3916], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6783], 00:25:45.656 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:25:45.656 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[47449], 00:25:45.656 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:25:45.656 | 99.99th=[51643] 00:25:45.656 bw ( KiB/s): min=27904, max=44544, per=33.08%, avg=36238.22, stdev=4741.38, samples=9 00:25:45.656 iops : min= 218, max= 348, avg=283.11, stdev=37.04, samples=9 00:25:45.656 lat (msec) : 4=1.32%, 10=89.42%, 20=3.83%, 50=4.18%, 100=1.25% 00:25:45.656 cpu : usr=95.04%, sys=3.60%, ctx=27, majf=0, minf=0 00:25:45.656 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.656 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:45.656 00:25:45.656 Run status group 0 (all jobs): 00:25:45.656 READ: bw=107MiB/s (112MB/s), 33.0MiB/s-38.2MiB/s (34.6MB/s-40.0MB/s), io=536MiB (562MB), run=5003-5005msec 00:25:45.656 07:31:47 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:45.656 07:31:47 -- target/dif.sh@43 -- # local sub 00:25:45.656 07:31:47 -- target/dif.sh@45 -- # for sub in "$@" 00:25:45.656 07:31:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:45.656 07:31:47 -- target/dif.sh@36 -- # local sub_id=0 00:25:45.656 07:31:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # bs=4k 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # numjobs=8 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # iodepth=16 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # runtime= 00:25:45.656 07:31:47 -- target/dif.sh@109 -- # files=2 00:25:45.656 07:31:47 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:45.656 07:31:47 -- target/dif.sh@28 -- # local sub 00:25:45.656 07:31:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.656 07:31:47 -- target/dif.sh@31 -- # create_subsystem 0 00:25:45.656 07:31:47 -- target/dif.sh@18 -- # local sub_id=0 00:25:45.656 07:31:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 bdev_null0 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 [2024-11-04 07:31:47.400673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.656 07:31:47 -- target/dif.sh@31 -- # create_subsystem 1 00:25:45.656 07:31:47 -- target/dif.sh@18 -- # local sub_id=1 00:25:45.656 07:31:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 bdev_null1 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.656 07:31:47 -- target/dif.sh@31 -- # create_subsystem 2 00:25:45.656 07:31:47 -- target/dif.sh@18 -- # local sub_id=2 00:25:45.656 07:31:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.656 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.656 bdev_null2 00:25:45.656 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.656 07:31:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:45.656 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.657 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.657 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.657 07:31:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:45.657 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.657 07:31:47 -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.657 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.657 07:31:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:45.657 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.657 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:25:45.657 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.657 07:31:47 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:45.657 07:31:47 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:45.657 07:31:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:45.657 07:31:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.657 07:31:47 -- target/dif.sh@82 -- # gen_fio_conf 00:25:45.657 07:31:47 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.657 07:31:47 -- target/dif.sh@54 -- # local file 00:25:45.657 07:31:47 -- target/dif.sh@56 -- # cat 00:25:45.657 07:31:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:45.657 07:31:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.657 07:31:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:45.657 07:31:47 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.657 07:31:47 -- common/autotest_common.sh@1320 -- # shift 00:25:45.657 07:31:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:45.657 07:31:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.657 07:31:47 -- nvmf/common.sh@520 -- # config=() 00:25:45.657 07:31:47 -- nvmf/common.sh@520 -- # local subsystem config 00:25:45.657 07:31:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:45.657 { 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme$subsystem", 00:25:45.657 "trtype": "$TEST_TRANSPORT", 00:25:45.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "$NVMF_PORT", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.657 "hdgst": ${hdgst:-false}, 00:25:45.657 "ddgst": ${ddgst:-false} 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 } 00:25:45.657 EOF 00:25:45.657 )") 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # cat 00:25:45.657 07:31:47 -- target/dif.sh@73 -- # cat 00:25:45.657 07:31:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.657 07:31:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:45.657 07:31:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file++ )) 00:25:45.657 07:31:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:45.657 { 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme$subsystem", 00:25:45.657 "trtype": "$TEST_TRANSPORT", 00:25:45.657 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "$NVMF_PORT", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.657 "hdgst": ${hdgst:-false}, 00:25:45.657 "ddgst": ${ddgst:-false} 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 } 00:25:45.657 EOF 00:25:45.657 )") 00:25:45.657 07:31:47 -- target/dif.sh@73 -- # cat 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # cat 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file++ )) 00:25:45.657 07:31:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.657 07:31:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:45.657 { 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme$subsystem", 00:25:45.657 "trtype": "$TEST_TRANSPORT", 00:25:45.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "$NVMF_PORT", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.657 "hdgst": ${hdgst:-false}, 00:25:45.657 "ddgst": ${ddgst:-false} 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 } 00:25:45.657 EOF 00:25:45.657 )") 00:25:45.657 07:31:47 -- nvmf/common.sh@542 -- # cat 00:25:45.657 07:31:47 -- nvmf/common.sh@544 -- # jq . 00:25:45.657 07:31:47 -- nvmf/common.sh@545 -- # IFS=, 00:25:45.657 07:31:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme0", 00:25:45.657 "trtype": "tcp", 00:25:45.657 "traddr": "10.0.0.2", 00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "4420", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:45.657 "hdgst": false, 00:25:45.657 "ddgst": false 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 },{ 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme1", 00:25:45.657 "trtype": "tcp", 00:25:45.657 "traddr": "10.0.0.2", 00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "4420", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.657 "hdgst": false, 00:25:45.657 "ddgst": false 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 },{ 00:25:45.657 "params": { 00:25:45.657 "name": "Nvme2", 00:25:45.657 "trtype": "tcp", 00:25:45.657 "traddr": "10.0.0.2", 00:25:45.657 "adrfam": "ipv4", 00:25:45.657 "trsvcid": "4420", 00:25:45.657 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:45.657 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:45.657 "hdgst": false, 00:25:45.657 "ddgst": false 00:25:45.657 }, 00:25:45.657 "method": "bdev_nvme_attach_controller" 00:25:45.657 }' 00:25:45.916 07:31:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:45.916 07:31:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:45.916 07:31:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.916 07:31:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.916 07:31:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:45.916 07:31:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:45.916 07:31:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:45.916 07:31:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:45.916 07:31:47 -- 
common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:45.916 07:31:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.916 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.916 ... 00:25:45.916 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.916 ... 00:25:45.916 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.916 ... 00:25:45.916 fio-3.35 00:25:45.916 Starting 24 threads 00:25:46.482 [2024-11-04 07:31:48.289412] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:46.482 [2024-11-04 07:31:48.289478] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:58.682 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102167: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=235, BW=941KiB/s (963kB/s)(9408KiB/10003msec) 00:25:58.682 slat (usec): min=4, max=4031, avg=24.02, stdev=209.89 00:25:58.682 clat (msec): min=2, max=150, avg=67.83, stdev=18.92 00:25:58.682 lat (msec): min=2, max=150, avg=67.85, stdev=18.93 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 21], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 56], 00:25:58.682 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:25:58.682 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 90], 95.00th=[ 101], 00:25:58.682 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 150], 00:25:58.682 | 99.99th=[ 150] 00:25:58.682 bw ( KiB/s): min= 656, max= 1152, per=3.69%, avg=915.58, stdev=120.91, samples=19 00:25:58.682 iops : min= 164, max= 288, avg=228.84, stdev=30.27, samples=19 00:25:58.682 lat (msec) : 4=0.38%, 10=0.26%, 20=0.04%, 50=9.91%, 100=84.01% 00:25:58.682 lat (msec) : 250=5.40% 00:25:58.682 cpu : usr=44.10%, sys=0.72%, ctx=1611, majf=0, minf=9 00:25:58.682 IO depths : 1=2.8%, 2=6.5%, 4=17.1%, 8=63.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102168: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=252, BW=1012KiB/s (1036kB/s)(9.91MiB/10036msec) 00:25:58.682 slat (usec): min=6, max=8012, avg=15.61, stdev=158.98 00:25:58.682 clat (msec): min=10, max=143, avg=63.09, stdev=20.85 00:25:58.682 lat (msec): min=10, max=143, avg=63.10, stdev=20.85 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 48], 00:25:58.682 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:25:58.682 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 93], 95.00th=[ 105], 00:25:58.682 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 144], 00:25:58.682 | 99.99th=[ 144] 00:25:58.682 bw ( KiB/s): min= 592, max= 1261, per=4.06%, avg=1008.65, stdev=170.70, samples=20 00:25:58.682 iops : min= 148, max= 315, avg=252.15, stdev=42.65, samples=20 00:25:58.682 lat (msec) : 20=0.63%, 50=25.73%, 100=67.61%, 250=6.03% 00:25:58.682 cpu : usr=32.69%, sys=0.49%, ctx=858, majf=0, minf=9 
00:25:58.682 IO depths : 1=1.6%, 2=3.7%, 4=12.4%, 8=70.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102169: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=301, BW=1206KiB/s (1235kB/s)(11.8MiB/10046msec) 00:25:58.682 slat (usec): min=6, max=8023, avg=14.20, stdev=145.73 00:25:58.682 clat (msec): min=2, max=120, avg=52.87, stdev=18.78 00:25:58.682 lat (msec): min=2, max=120, avg=52.88, stdev=18.78 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:25:58.682 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 57], 00:25:58.682 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 77], 95.00th=[ 85], 00:25:58.682 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 122], 00:25:58.682 | 99.99th=[ 122] 00:25:58.682 bw ( KiB/s): min= 816, max= 1920, per=4.85%, avg=1203.90, stdev=240.04, samples=20 00:25:58.682 iops : min= 204, max= 480, avg=300.90, stdev=60.01, samples=20 00:25:58.682 lat (msec) : 4=1.06%, 10=1.06%, 20=1.58%, 50=43.10%, 100=51.06% 00:25:58.682 lat (msec) : 250=2.15% 00:25:58.682 cpu : usr=43.31%, sys=0.69%, ctx=1384, majf=0, minf=0 00:25:58.682 IO depths : 1=1.2%, 2=2.6%, 4=9.8%, 8=74.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102170: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=239, BW=959KiB/s (982kB/s)(9608KiB/10015msec) 00:25:58.682 slat (usec): min=4, max=8031, avg=16.07, stdev=163.81 00:25:58.682 clat (msec): min=20, max=131, avg=66.57, stdev=19.03 00:25:58.682 lat (msec): min=20, max=131, avg=66.58, stdev=19.04 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 54], 00:25:58.682 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 68], 00:25:58.682 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 103], 00:25:58.682 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:25:58.682 | 99.99th=[ 132] 00:25:58.682 bw ( KiB/s): min= 624, max= 1200, per=3.85%, avg=954.15, stdev=143.90, samples=20 00:25:58.682 iops : min= 156, max= 300, avg=238.50, stdev=35.92, samples=20 00:25:58.682 lat (msec) : 50=16.36%, 100=77.69%, 250=5.95% 00:25:58.682 cpu : usr=38.58%, sys=0.47%, ctx=1149, majf=0, minf=9 00:25:58.682 IO depths : 1=2.3%, 2=5.5%, 4=15.4%, 8=66.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102171: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=287, BW=1152KiB/s (1179kB/s)(11.3MiB/10033msec) 00:25:58.682 slat (usec): min=5, max=7036, avg=20.23, stdev=198.24 00:25:58.682 
clat (msec): min=26, max=122, avg=55.42, stdev=16.10 00:25:58.682 lat (msec): min=26, max=122, avg=55.44, stdev=16.10 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:25:58.682 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 58], 00:25:58.682 | 70.00th=[ 62], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 85], 00:25:58.682 | 99.00th=[ 107], 99.50th=[ 115], 99.90th=[ 123], 99.95th=[ 123], 00:25:58.682 | 99.99th=[ 123] 00:25:58.682 bw ( KiB/s): min= 856, max= 1504, per=4.63%, avg=1149.50, stdev=153.00, samples=20 00:25:58.682 iops : min= 214, max= 376, avg=287.30, stdev=38.24, samples=20 00:25:58.682 lat (msec) : 50=42.89%, 100=54.93%, 250=2.18% 00:25:58.682 cpu : usr=41.55%, sys=0.52%, ctx=1241, majf=0, minf=9 00:25:58.682 IO depths : 1=0.7%, 2=1.8%, 4=8.5%, 8=76.1%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102172: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=248, BW=992KiB/s (1016kB/s)(9944KiB/10021msec) 00:25:58.682 slat (usec): min=4, max=8029, avg=17.21, stdev=179.65 00:25:58.682 clat (msec): min=20, max=157, avg=64.34, stdev=19.44 00:25:58.682 lat (msec): min=20, max=157, avg=64.36, stdev=19.44 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:25:58.682 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 69], 00:25:58.682 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 97], 00:25:58.682 | 99.00th=[ 116], 99.50th=[ 131], 99.90th=[ 159], 99.95th=[ 159], 00:25:58.682 | 99.99th=[ 159] 00:25:58.682 bw ( KiB/s): min= 640, max= 1376, per=3.99%, avg=989.70, stdev=203.10, samples=20 00:25:58.682 iops : min= 160, max= 344, avg=247.40, stdev=50.75, samples=20 00:25:58.682 lat (msec) : 50=24.82%, 100=71.68%, 250=3.50% 00:25:58.682 cpu : usr=38.28%, sys=0.53%, ctx=967, majf=0, minf=9 00:25:58.682 IO depths : 1=1.5%, 2=3.6%, 4=12.8%, 8=70.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102173: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=243, BW=973KiB/s (997kB/s)(9736KiB/10003msec) 00:25:58.682 slat (usec): min=3, max=8026, avg=15.91, stdev=162.63 00:25:58.682 clat (msec): min=6, max=135, avg=65.66, stdev=19.29 00:25:58.682 lat (msec): min=6, max=135, avg=65.67, stdev=19.29 00:25:58.682 clat percentiles (msec): 00:25:58.682 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 53], 00:25:58.682 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66], 00:25:58.682 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 101], 00:25:58.682 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 136], 00:25:58.682 | 99.99th=[ 136] 00:25:58.682 bw ( KiB/s): min= 592, max= 1280, per=3.85%, avg=956.68, stdev=172.12, samples=19 00:25:58.682 iops : min= 148, max= 320, avg=239.16, stdev=43.03, samples=19 00:25:58.682 lat (msec) : 10=0.37%, 
50=17.30%, 100=77.36%, 250=4.97% 00:25:58.682 cpu : usr=43.43%, sys=0.47%, ctx=1266, majf=0, minf=9 00:25:58.682 IO depths : 1=1.8%, 2=4.4%, 4=12.7%, 8=69.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.682 issued rwts: total=2434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.682 filename0: (groupid=0, jobs=1): err= 0: pid=102174: Mon Nov 4 07:31:58 2024 00:25:58.682 read: IOPS=233, BW=933KiB/s (955kB/s)(9336KiB/10010msec) 00:25:58.682 slat (nsec): min=3362, max=47266, avg=12144.55, stdev=7468.40 00:25:58.682 clat (msec): min=20, max=137, avg=68.52, stdev=18.26 00:25:58.683 lat (msec): min=20, max=137, avg=68.53, stdev=18.26 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:25:58.683 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:25:58.683 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 103], 00:25:58.683 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:25:58.683 | 99.99th=[ 138] 00:25:58.683 bw ( KiB/s): min= 712, max= 1224, per=3.74%, avg=928.84, stdev=124.25, samples=19 00:25:58.683 iops : min= 178, max= 306, avg=232.21, stdev=31.06, samples=19 00:25:58.683 lat (msec) : 50=11.35%, 100=83.03%, 250=5.61% 00:25:58.683 cpu : usr=38.08%, sys=0.48%, ctx=1071, majf=0, minf=9 00:25:58.683 IO depths : 1=2.3%, 2=5.8%, 4=16.2%, 8=65.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102175: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=279, BW=1118KiB/s (1145kB/s)(10.9MiB/10026msec) 00:25:58.683 slat (usec): min=3, max=8027, avg=24.53, stdev=303.56 00:25:58.683 clat (msec): min=15, max=115, avg=57.09, stdev=17.17 00:25:58.683 lat (msec): min=15, max=115, avg=57.11, stdev=17.17 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 43], 00:25:58.683 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 00:25:58.683 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 88], 00:25:58.683 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 116], 99.95th=[ 116], 00:25:58.683 | 99.99th=[ 116] 00:25:58.683 bw ( KiB/s): min= 896, max= 1456, per=4.49%, avg=1114.80, stdev=151.80, samples=20 00:25:58.683 iops : min= 224, max= 364, avg=278.70, stdev=37.95, samples=20 00:25:58.683 lat (msec) : 20=0.57%, 50=39.49%, 100=59.15%, 250=0.78% 00:25:58.683 cpu : usr=32.82%, sys=0.38%, ctx=857, majf=0, minf=9 00:25:58.683 IO depths : 1=0.4%, 2=0.9%, 4=5.7%, 8=79.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102176: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=249, BW=997KiB/s (1021kB/s)(9984KiB/10017msec) 00:25:58.683 
slat (usec): min=3, max=8027, avg=18.73, stdev=189.02 00:25:58.683 clat (msec): min=19, max=133, avg=64.10, stdev=18.51 00:25:58.683 lat (msec): min=19, max=133, avg=64.12, stdev=18.52 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:25:58.683 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:25:58.683 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 96], 00:25:58.683 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 134], 99.95th=[ 134], 00:25:58.683 | 99.99th=[ 134] 00:25:58.683 bw ( KiB/s): min= 640, max= 1280, per=4.00%, avg=993.05, stdev=158.57, samples=20 00:25:58.683 iops : min= 160, max= 320, avg=248.25, stdev=39.64, samples=20 00:25:58.683 lat (msec) : 20=0.20%, 50=21.75%, 100=74.32%, 250=3.73% 00:25:58.683 cpu : usr=38.81%, sys=0.45%, ctx=1269, majf=0, minf=9 00:25:58.683 IO depths : 1=1.7%, 2=3.5%, 4=11.5%, 8=71.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102177: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=263, BW=1054KiB/s (1080kB/s)(10.3MiB/10036msec) 00:25:58.683 slat (usec): min=4, max=8019, avg=21.10, stdev=249.64 00:25:58.683 clat (msec): min=20, max=128, avg=60.55, stdev=18.63 00:25:58.683 lat (msec): min=20, max=128, avg=60.57, stdev=18.63 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:25:58.683 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 00:25:58.683 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 96], 00:25:58.683 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 129], 00:25:58.683 | 99.99th=[ 129] 00:25:58.683 bw ( KiB/s): min= 640, max= 1480, per=4.24%, avg=1051.50, stdev=203.87, samples=20 00:25:58.683 iops : min= 160, max= 370, avg=262.85, stdev=50.98, samples=20 00:25:58.683 lat (msec) : 50=30.25%, 100=66.24%, 250=3.52% 00:25:58.683 cpu : usr=35.71%, sys=0.48%, ctx=963, majf=0, minf=9 00:25:58.683 IO depths : 1=1.6%, 2=3.7%, 4=11.5%, 8=71.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102178: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=240, BW=961KiB/s (984kB/s)(9624KiB/10019msec) 00:25:58.683 slat (usec): min=4, max=8020, avg=26.86, stdev=315.66 00:25:58.683 clat (msec): min=25, max=141, avg=66.41, stdev=17.80 00:25:58.683 lat (msec): min=25, max=141, avg=66.43, stdev=17.80 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 56], 00:25:58.683 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 67], 00:25:58.683 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 97], 00:25:58.683 | 99.00th=[ 120], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:25:58.683 | 99.99th=[ 142] 00:25:58.683 bw ( KiB/s): min= 640, max= 1152, per=3.86%, avg=957.15, stdev=115.49, samples=20 00:25:58.683 iops : min= 160, max= 
288, avg=239.25, stdev=28.88, samples=20 00:25:58.683 lat (msec) : 50=14.96%, 100=81.09%, 250=3.95% 00:25:58.683 cpu : usr=37.65%, sys=0.40%, ctx=1180, majf=0, minf=9 00:25:58.683 IO depths : 1=2.1%, 2=4.9%, 4=14.0%, 8=67.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102179: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=265, BW=1062KiB/s (1087kB/s)(10.4MiB/10005msec) 00:25:58.683 slat (usec): min=4, max=8029, avg=21.75, stdev=245.93 00:25:58.683 clat (msec): min=14, max=122, avg=60.11, stdev=17.40 00:25:58.683 lat (msec): min=14, max=122, avg=60.13, stdev=17.40 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:25:58.683 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 61], 00:25:58.683 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:25:58.683 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:25:58.683 | 99.99th=[ 124] 00:25:58.683 bw ( KiB/s): min= 744, max= 1328, per=4.24%, avg=1052.89, stdev=150.48, samples=19 00:25:58.683 iops : min= 186, max= 332, avg=263.21, stdev=37.60, samples=19 00:25:58.683 lat (msec) : 20=0.98%, 50=26.62%, 100=69.05%, 250=3.35% 00:25:58.683 cpu : usr=40.19%, sys=0.52%, ctx=1145, majf=0, minf=9 00:25:58.683 IO depths : 1=1.2%, 2=3.2%, 4=12.0%, 8=71.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: pid=102180: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.5MiB/10039msec) 00:25:58.683 slat (usec): min=4, max=3314, avg=14.22, stdev=84.97 00:25:58.683 clat (msec): min=2, max=138, avg=54.41, stdev=20.62 00:25:58.683 lat (msec): min=2, max=138, avg=54.42, stdev=20.62 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:25:58.683 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:25:58.683 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 96], 00:25:58.683 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 138], 00:25:58.683 | 99.99th=[ 138] 00:25:58.683 bw ( KiB/s): min= 688, max= 1880, per=4.72%, avg=1172.80, stdev=270.78, samples=20 00:25:58.683 iops : min= 172, max= 470, avg=293.15, stdev=67.65, samples=20 00:25:58.683 lat (msec) : 4=1.09%, 10=0.95%, 20=1.22%, 50=45.18%, 100=48.17% 00:25:58.683 lat (msec) : 250=3.40% 00:25:58.683 cpu : usr=37.81%, sys=0.43%, ctx=1063, majf=0, minf=0 00:25:58.683 IO depths : 1=0.8%, 2=1.7%, 4=8.0%, 8=76.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.683 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.683 filename1: (groupid=0, jobs=1): err= 0: 
pid=102181: Mon Nov 4 07:31:58 2024 00:25:58.683 read: IOPS=288, BW=1154KiB/s (1181kB/s)(11.3MiB/10011msec) 00:25:58.683 slat (usec): min=4, max=6572, avg=15.68, stdev=143.34 00:25:58.683 clat (msec): min=5, max=127, avg=55.39, stdev=18.49 00:25:58.683 lat (msec): min=5, max=127, avg=55.40, stdev=18.49 00:25:58.683 clat percentiles (msec): 00:25:58.683 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:25:58.683 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 60], 00:25:58.683 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 88], 00:25:58.683 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 128], 99.95th=[ 128], 00:25:58.683 | 99.99th=[ 128] 00:25:58.683 bw ( KiB/s): min= 784, max= 1488, per=4.63%, avg=1148.40, stdev=176.51, samples=20 00:25:58.683 iops : min= 196, max= 372, avg=287.10, stdev=44.13, samples=20 00:25:58.683 lat (msec) : 10=0.55%, 20=2.01%, 50=40.35%, 100=55.18%, 250=1.91% 00:25:58.683 cpu : usr=38.65%, sys=0.50%, ctx=1144, majf=0, minf=9 00:25:58.683 IO depths : 1=0.9%, 2=2.0%, 4=8.3%, 8=75.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename1: (groupid=0, jobs=1): err= 0: pid=102182: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=235, BW=942KiB/s (965kB/s)(9436KiB/10013msec) 00:25:58.684 slat (usec): min=3, max=8149, avg=18.13, stdev=186.50 00:25:58.684 clat (msec): min=24, max=155, avg=67.80, stdev=20.22 00:25:58.684 lat (msec): min=24, max=155, avg=67.82, stdev=20.22 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 00:25:58.684 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:25:58.684 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:25:58.684 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:25:58.684 | 99.99th=[ 157] 00:25:58.684 bw ( KiB/s): min= 640, max= 1152, per=3.78%, avg=937.00, stdev=155.39, samples=20 00:25:58.684 iops : min= 160, max= 288, avg=234.20, stdev=38.80, samples=20 00:25:58.684 lat (msec) : 50=19.08%, 100=74.44%, 250=6.49% 00:25:58.684 cpu : usr=32.87%, sys=0.43%, ctx=855, majf=0, minf=9 00:25:58.684 IO depths : 1=1.6%, 2=3.5%, 4=11.2%, 8=71.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102183: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=273, BW=1096KiB/s (1122kB/s)(10.8MiB/10048msec) 00:25:58.684 slat (usec): min=3, max=8009, avg=18.09, stdev=195.43 00:25:58.684 clat (msec): min=24, max=118, avg=58.26, stdev=16.96 00:25:58.684 lat (msec): min=24, max=118, avg=58.28, stdev=16.96 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 43], 00:25:58.684 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:25:58.684 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 91], 00:25:58.684 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 120], 00:25:58.684 | 99.99th=[ 120] 
00:25:58.684 bw ( KiB/s): min= 816, max= 1408, per=4.41%, avg=1094.40, stdev=157.08, samples=20 00:25:58.684 iops : min= 204, max= 352, avg=273.60, stdev=39.27, samples=20 00:25:58.684 lat (msec) : 50=35.61%, 100=62.68%, 250=1.71% 00:25:58.684 cpu : usr=39.11%, sys=0.51%, ctx=1243, majf=0, minf=9 00:25:58.684 IO depths : 1=1.1%, 2=2.4%, 4=10.3%, 8=73.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102184: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=273, BW=1095KiB/s (1122kB/s)(10.7MiB/10036msec) 00:25:58.684 slat (usec): min=6, max=4014, avg=13.41, stdev=76.70 00:25:58.684 clat (msec): min=5, max=121, avg=58.28, stdev=17.84 00:25:58.684 lat (msec): min=5, max=121, avg=58.29, stdev=17.83 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 44], 00:25:58.684 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 00:25:58.684 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 89], 00:25:58.684 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 122], 00:25:58.684 | 99.99th=[ 122] 00:25:58.684 bw ( KiB/s): min= 848, max= 1405, per=4.40%, avg=1092.65, stdev=140.56, samples=20 00:25:58.684 iops : min= 212, max= 351, avg=273.15, stdev=35.11, samples=20 00:25:58.684 lat (msec) : 10=1.09%, 20=0.66%, 50=32.68%, 100=64.67%, 250=0.91% 00:25:58.684 cpu : usr=36.40%, sys=0.44%, ctx=1024, majf=0, minf=9 00:25:58.684 IO depths : 1=0.7%, 2=1.5%, 4=8.3%, 8=76.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102185: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=243, BW=972KiB/s (996kB/s)(9728KiB/10006msec) 00:25:58.684 slat (nsec): min=4831, max=55004, avg=12572.61, stdev=8008.26 00:25:58.684 clat (msec): min=27, max=166, avg=65.75, stdev=18.81 00:25:58.684 lat (msec): min=27, max=166, avg=65.76, stdev=18.81 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 52], 00:25:58.684 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:25:58.684 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 96], 00:25:58.684 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 167], 99.95th=[ 167], 00:25:58.684 | 99.99th=[ 167] 00:25:58.684 bw ( KiB/s): min= 512, max= 1280, per=3.89%, avg=965.89, stdev=163.49, samples=19 00:25:58.684 iops : min= 128, max= 320, avg=241.47, stdev=40.87, samples=19 00:25:58.684 lat (msec) : 50=19.45%, 100=76.52%, 250=4.03% 00:25:58.684 cpu : usr=37.49%, sys=0.48%, ctx=1288, majf=0, minf=9 00:25:58.684 IO depths : 1=0.8%, 2=1.9%, 4=9.3%, 8=74.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102186: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=283, BW=1135KiB/s (1162kB/s)(11.1MiB/10001msec) 00:25:58.684 slat (usec): min=4, max=8061, avg=19.29, stdev=226.03 00:25:58.684 clat (msec): min=23, max=118, avg=56.27, stdev=17.82 00:25:58.684 lat (msec): min=24, max=118, avg=56.29, stdev=17.83 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 41], 00:25:58.684 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 60], 00:25:58.684 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 94], 00:25:58.684 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 118], 00:25:58.684 | 99.99th=[ 118] 00:25:58.684 bw ( KiB/s): min= 816, max= 1504, per=4.53%, avg=1125.16, stdev=191.43, samples=19 00:25:58.684 iops : min= 204, max= 376, avg=281.26, stdev=47.86, samples=19 00:25:58.684 lat (msec) : 50=45.14%, 100=53.07%, 250=1.80% 00:25:58.684 cpu : usr=37.54%, sys=0.51%, ctx=1030, majf=0, minf=9 00:25:58.684 IO depths : 1=0.3%, 2=0.7%, 4=6.5%, 8=79.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=89.2%, 8=6.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102187: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=268, BW=1073KiB/s (1098kB/s)(10.5MiB/10032msec) 00:25:58.684 slat (usec): min=4, max=8022, avg=27.31, stdev=271.00 00:25:58.684 clat (msec): min=15, max=130, avg=59.39, stdev=18.36 00:25:58.684 lat (msec): min=15, max=130, avg=59.42, stdev=18.36 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 45], 00:25:58.684 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:25:58.684 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 90], 00:25:58.684 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 131], 00:25:58.684 | 99.99th=[ 131] 00:25:58.684 bw ( KiB/s): min= 848, max= 1408, per=4.31%, avg=1069.95, stdev=153.61, samples=20 00:25:58.684 iops : min= 212, max= 352, avg=267.40, stdev=38.40, samples=20 00:25:58.684 lat (msec) : 20=0.48%, 50=31.38%, 100=65.02%, 250=3.12% 00:25:58.684 cpu : usr=43.42%, sys=0.64%, ctx=1249, majf=0, minf=9 00:25:58.684 IO depths : 1=0.9%, 2=2.0%, 4=7.9%, 8=75.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=89.7%, 8=6.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102188: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=248, BW=994KiB/s (1018kB/s)(9968KiB/10024msec) 00:25:58.684 slat (usec): min=3, max=8032, avg=21.10, stdev=247.36 00:25:58.684 clat (msec): min=15, max=133, avg=64.19, stdev=19.86 00:25:58.684 lat (msec): min=15, max=133, avg=64.21, stdev=19.87 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:25:58.684 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:25:58.684 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 96], 00:25:58.684 | 99.00th=[ 125], 
99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 134], 00:25:58.684 | 99.99th=[ 134] 00:25:58.684 bw ( KiB/s): min= 768, max= 1328, per=3.99%, avg=990.25, stdev=139.37, samples=20 00:25:58.684 iops : min= 192, max= 332, avg=247.55, stdev=34.84, samples=20 00:25:58.684 lat (msec) : 20=0.24%, 50=23.96%, 100=71.59%, 250=4.21% 00:25:58.684 cpu : usr=42.83%, sys=0.68%, ctx=1404, majf=0, minf=9 00:25:58.684 IO depths : 1=2.6%, 2=6.1%, 4=16.5%, 8=64.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:25:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.684 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.684 filename2: (groupid=0, jobs=1): err= 0: pid=102189: Mon Nov 4 07:31:58 2024 00:25:58.684 read: IOPS=237, BW=952KiB/s (974kB/s)(9540KiB/10025msec) 00:25:58.684 slat (usec): min=5, max=8030, avg=21.46, stdev=246.32 00:25:58.684 clat (msec): min=27, max=135, avg=67.10, stdev=17.62 00:25:58.684 lat (msec): min=27, max=135, avg=67.12, stdev=17.62 00:25:58.684 clat percentiles (msec): 00:25:58.684 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 55], 00:25:58.684 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:25:58.684 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 101], 00:25:58.684 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 136], 99.95th=[ 136], 00:25:58.684 | 99.99th=[ 136] 00:25:58.685 bw ( KiB/s): min= 768, max= 1160, per=3.82%, avg=947.65, stdev=100.95, samples=20 00:25:58.685 iops : min= 192, max= 290, avg=236.90, stdev=25.26, samples=20 00:25:58.685 lat (msec) : 50=16.73%, 100=78.24%, 250=5.03% 00:25:58.685 cpu : usr=35.23%, sys=0.45%, ctx=1013, majf=0, minf=9 00:25:58.685 IO depths : 1=2.0%, 2=4.4%, 4=13.3%, 8=69.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:25:58.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.685 complete : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.685 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.685 filename2: (groupid=0, jobs=1): err= 0: pid=102190: Mon Nov 4 07:31:58 2024 00:25:58.685 read: IOPS=231, BW=926KiB/s (948kB/s)(9272KiB/10012msec) 00:25:58.685 slat (usec): min=4, max=8051, avg=22.20, stdev=281.84 00:25:58.685 clat (msec): min=21, max=141, avg=68.92, stdev=20.15 00:25:58.685 lat (msec): min=21, max=141, avg=68.94, stdev=20.16 00:25:58.685 clat percentiles (msec): 00:25:58.685 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 56], 00:25:58.685 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:25:58.685 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 104], 00:25:58.685 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 142], 99.95th=[ 142], 00:25:58.685 | 99.99th=[ 142] 00:25:58.685 bw ( KiB/s): min= 640, max= 1200, per=3.72%, avg=922.80, stdev=150.21, samples=20 00:25:58.685 iops : min= 160, max= 300, avg=230.70, stdev=37.55, samples=20 00:25:58.685 lat (msec) : 50=13.76%, 100=79.59%, 250=6.64% 00:25:58.685 cpu : usr=34.73%, sys=0.51%, ctx=1053, majf=0, minf=9 00:25:58.685 IO depths : 1=1.9%, 2=4.7%, 4=14.4%, 8=67.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:58.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.685 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.685 issued rwts: total=2318,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:58.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:58.685 00:25:58.685 Run status group 0 (all jobs): 00:25:58.685 READ: bw=24.2MiB/s (25.4MB/s), 926KiB/s-1206KiB/s (948kB/s-1235kB/s), io=243MiB (255MB), run=10001-10048msec 00:25:58.685 07:31:58 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:58.685 07:31:58 -- target/dif.sh@43 -- # local sub 00:25:58.685 07:31:58 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.685 07:31:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.685 07:31:58 -- target/dif.sh@36 -- # local sub_id=0 00:25:58.685 07:31:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.685 07:31:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:58.685 07:31:58 -- target/dif.sh@36 -- # local sub_id=1 00:25:58.685 07:31:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.685 07:31:58 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:58.685 07:31:58 -- target/dif.sh@36 -- # local sub_id=2 00:25:58.685 07:31:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # numjobs=2 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # iodepth=8 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # runtime=5 00:25:58.685 07:31:58 -- target/dif.sh@115 -- # files=1 00:25:58.685 07:31:58 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:58.685 07:31:58 -- target/dif.sh@28 -- # local sub 00:25:58.685 07:31:58 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.685 07:31:58 -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.685 07:31:58 -- target/dif.sh@18 -- # local sub_id=0 00:25:58.685 07:31:58 -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 bdev_null0 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 [2024-11-04 07:31:58.788997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.685 07:31:58 -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.685 07:31:58 -- target/dif.sh@18 -- # local sub_id=1 00:25:58.685 07:31:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 bdev_null1 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.685 07:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.685 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 07:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.685 07:31:58 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:58.685 07:31:58 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:58.685 07:31:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:58.685 07:31:58 -- nvmf/common.sh@520 -- # config=() 00:25:58.685 07:31:58 -- nvmf/common.sh@520 -- # local subsystem config 00:25:58.685 07:31:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.685 07:31:58 -- nvmf/common.sh@542 -- # config+=("$(cat 
<<-EOF 00:25:58.685 { 00:25:58.685 "params": { 00:25:58.685 "name": "Nvme$subsystem", 00:25:58.685 "trtype": "$TEST_TRANSPORT", 00:25:58.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.685 "adrfam": "ipv4", 00:25:58.685 "trsvcid": "$NVMF_PORT", 00:25:58.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.685 "hdgst": ${hdgst:-false}, 00:25:58.685 "ddgst": ${ddgst:-false} 00:25:58.685 }, 00:25:58.685 "method": "bdev_nvme_attach_controller" 00:25:58.685 } 00:25:58.685 EOF 00:25:58.685 )") 00:25:58.685 07:31:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.685 07:31:58 -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.685 07:31:58 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.685 07:31:58 -- target/dif.sh@54 -- # local file 00:25:58.685 07:31:58 -- target/dif.sh@56 -- # cat 00:25:58.685 07:31:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:58.685 07:31:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.685 07:31:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:58.685 07:31:58 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.685 07:31:58 -- common/autotest_common.sh@1320 -- # shift 00:25:58.685 07:31:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:58.685 07:31:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.685 07:31:58 -- nvmf/common.sh@542 -- # cat 00:25:58.685 07:31:58 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.685 07:31:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.685 07:31:58 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.685 07:31:58 -- target/dif.sh@73 -- # cat 00:25:58.685 07:31:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:58.685 07:31:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:58.685 07:31:58 -- target/dif.sh@72 -- # (( file++ )) 00:25:58.685 07:31:58 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.685 07:31:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.685 07:31:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.685 { 00:25:58.685 "params": { 00:25:58.685 "name": "Nvme$subsystem", 00:25:58.686 "trtype": "$TEST_TRANSPORT", 00:25:58.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.686 "adrfam": "ipv4", 00:25:58.686 "trsvcid": "$NVMF_PORT", 00:25:58.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.686 "hdgst": ${hdgst:-false}, 00:25:58.686 "ddgst": ${ddgst:-false} 00:25:58.686 }, 00:25:58.686 "method": "bdev_nvme_attach_controller" 00:25:58.686 } 00:25:58.686 EOF 00:25:58.686 )") 00:25:58.686 07:31:58 -- nvmf/common.sh@542 -- # cat 00:25:58.686 07:31:58 -- nvmf/common.sh@544 -- # jq . 
00:25:58.686 07:31:58 -- nvmf/common.sh@545 -- # IFS=, 00:25:58.686 07:31:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:58.686 "params": { 00:25:58.686 "name": "Nvme0", 00:25:58.686 "trtype": "tcp", 00:25:58.686 "traddr": "10.0.0.2", 00:25:58.686 "adrfam": "ipv4", 00:25:58.686 "trsvcid": "4420", 00:25:58.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.686 "hdgst": false, 00:25:58.686 "ddgst": false 00:25:58.686 }, 00:25:58.686 "method": "bdev_nvme_attach_controller" 00:25:58.686 },{ 00:25:58.686 "params": { 00:25:58.686 "name": "Nvme1", 00:25:58.686 "trtype": "tcp", 00:25:58.686 "traddr": "10.0.0.2", 00:25:58.686 "adrfam": "ipv4", 00:25:58.686 "trsvcid": "4420", 00:25:58.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.686 "hdgst": false, 00:25:58.686 "ddgst": false 00:25:58.686 }, 00:25:58.686 "method": "bdev_nvme_attach_controller" 00:25:58.686 }' 00:25:58.686 07:31:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.686 07:31:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.686 07:31:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.686 07:31:58 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.686 07:31:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:58.686 07:31:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:58.686 07:31:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.686 07:31:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.686 07:31:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:58.686 07:31:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.686 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:58.686 ... 00:25:58.686 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:58.686 ... 00:25:58.686 fio-3.35 00:25:58.686 Starting 4 threads 00:25:58.686 [2024-11-04 07:31:59.518955] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
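The trace above shows target/dif.sh building each fio target out of a metadata-capable null bdev exported over NVMe/TCP, then handing fio a generated JSON config over /dev/fd/62 so the spdk_bdev ioengine can attach to those subsystems. A minimal manual sketch of the same setup, assuming a running nvmf_tgt with a TCP transport already configured; the RPC names and arguments are the ones rpc_cmd issues in the log, while job.fio and config.json are stand-in file names:

    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it as an NVMe/TCP subsystem listening on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # drive it with fio through the SPDK bdev plugin, passing the generated JSON config
    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json job.fio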
00:25:58.686 [2024-11-04 07:31:59.519032] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:02.872 00:26:02.872 filename0: (groupid=0, jobs=1): err= 0: pid=102327: Mon Nov 4 07:32:04 2024 00:26:02.872 read: IOPS=2235, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5004msec) 00:26:02.872 slat (nsec): min=3155, max=63759, avg=10276.43, stdev=6825.33 00:26:02.872 clat (usec): min=1026, max=8630, avg=3530.05, stdev=272.13 00:26:02.872 lat (usec): min=1033, max=8654, avg=3540.33, stdev=272.20 00:26:02.872 clat percentiles (usec): 00:26:02.872 | 1.00th=[ 2704], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3425], 00:26:02.872 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:02.872 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3818], 00:26:02.872 | 99.00th=[ 4080], 99.50th=[ 4359], 99.90th=[ 6390], 99.95th=[ 7767], 00:26:02.872 | 99.99th=[ 7832] 00:26:02.872 bw ( KiB/s): min=17456, max=18224, per=25.09%, avg=17884.44, stdev=280.32, samples=9 00:26:02.872 iops : min= 2182, max= 2278, avg=2235.56, stdev=35.04, samples=9 00:26:02.872 lat (msec) : 2=0.54%, 4=98.00%, 10=1.47% 00:26:02.872 cpu : usr=95.14%, sys=3.72%, ctx=9, majf=0, minf=0 00:26:02.872 IO depths : 1=7.0%, 2=21.2%, 4=53.6%, 8=18.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 issued rwts: total=11187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.872 filename0: (groupid=0, jobs=1): err= 0: pid=102328: Mon Nov 4 07:32:04 2024 00:26:02.872 read: IOPS=2225, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5001msec) 00:26:02.872 slat (nsec): min=6078, max=87542, avg=14128.55, stdev=6493.88 00:26:02.872 clat (usec): min=1567, max=5882, avg=3529.60, stdev=211.16 00:26:02.872 lat (usec): min=1577, max=5934, avg=3543.73, stdev=211.62 00:26:02.872 clat percentiles (usec): 00:26:02.872 | 1.00th=[ 2802], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3425], 00:26:02.872 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:02.872 | 70.00th=[ 3589], 80.00th=[ 3621], 90.00th=[ 3720], 95.00th=[ 3785], 00:26:02.872 | 99.00th=[ 4293], 99.50th=[ 4555], 99.90th=[ 5276], 99.95th=[ 5342], 00:26:02.872 | 99.99th=[ 5866] 00:26:02.872 bw ( KiB/s): min=17280, max=18176, per=24.98%, avg=17806.22, stdev=282.21, samples=9 00:26:02.872 iops : min= 2160, max= 2272, avg=2225.78, stdev=35.28, samples=9 00:26:02.872 lat (msec) : 2=0.04%, 4=97.92%, 10=2.03% 00:26:02.872 cpu : usr=94.08%, sys=4.54%, ctx=6, majf=0, minf=9 00:26:02.872 IO depths : 1=8.3%, 2=25.0%, 4=50.0%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 issued rwts: total=11128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.872 filename1: (groupid=0, jobs=1): err= 0: pid=102329: Mon Nov 4 07:32:04 2024 00:26:02.872 read: IOPS=2227, BW=17.4MiB/s (18.2MB/s)(87.1MiB/5003msec) 00:26:02.872 slat (usec): min=6, max=284, avg=12.82, stdev= 8.14 00:26:02.872 clat (usec): min=834, max=6244, avg=3534.43, stdev=261.10 00:26:02.872 lat (usec): min=844, max=6251, avg=3547.26, stdev=261.29 00:26:02.872 clat percentiles (usec): 00:26:02.872 | 1.00th=[ 2737], 5.00th=[ 3326], 
10.00th=[ 3392], 20.00th=[ 3425], 00:26:02.872 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:02.872 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3851], 00:26:02.872 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5604], 99.95th=[ 5866], 00:26:02.872 | 99.99th=[ 5932] 00:26:02.872 bw ( KiB/s): min=17408, max=18224, per=25.00%, avg=17818.67, stdev=282.84, samples=9 00:26:02.872 iops : min= 2176, max= 2278, avg=2227.33, stdev=35.36, samples=9 00:26:02.872 lat (usec) : 1000=0.04% 00:26:02.872 lat (msec) : 2=0.34%, 4=96.63%, 10=3.00% 00:26:02.872 cpu : usr=94.72%, sys=3.66%, ctx=78, majf=0, minf=9 00:26:02.872 IO depths : 1=7.1%, 2=18.6%, 4=56.3%, 8=18.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.872 issued rwts: total=11143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.872 filename1: (groupid=0, jobs=1): err= 0: pid=102330: Mon Nov 4 07:32:04 2024 00:26:02.872 read: IOPS=2225, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5001msec) 00:26:02.872 slat (nsec): min=6210, max=86893, avg=14192.78, stdev=6606.21 00:26:02.872 clat (usec): min=1380, max=7856, avg=3529.65, stdev=306.83 00:26:02.872 lat (usec): min=1390, max=7877, avg=3543.84, stdev=307.12 00:26:02.872 clat percentiles (usec): 00:26:02.873 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:02.873 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:02.873 | 70.00th=[ 3589], 80.00th=[ 3621], 90.00th=[ 3720], 95.00th=[ 3818], 00:26:02.873 | 99.00th=[ 4817], 99.50th=[ 5407], 99.90th=[ 5932], 99.95th=[ 6063], 00:26:02.873 | 99.99th=[ 6587] 00:26:02.873 bw ( KiB/s): min=17314, max=18192, per=24.99%, avg=17810.00, stdev=280.18, samples=9 00:26:02.873 iops : min= 2164, max= 2274, avg=2226.22, stdev=35.08, samples=9 00:26:02.873 lat (msec) : 2=0.28%, 4=97.01%, 10=2.71% 00:26:02.873 cpu : usr=95.32%, sys=3.38%, ctx=4, majf=0, minf=9 00:26:02.873 IO depths : 1=5.6%, 2=25.0%, 4=50.0%, 8=19.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.873 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.873 issued rwts: total=11128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.873 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.873 00:26:02.873 Run status group 0 (all jobs): 00:26:02.873 READ: bw=69.6MiB/s (73.0MB/s), 17.4MiB/s-17.5MiB/s (18.2MB/s-18.3MB/s), io=348MiB (365MB), run=5001-5004msec 00:26:03.132 07:32:04 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:03.132 07:32:04 -- target/dif.sh@43 -- # local sub 00:26:03.132 07:32:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.132 07:32:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.132 07:32:04 -- target/dif.sh@36 -- # local sub_id=0 00:26:03.132 07:32:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.132 07:32:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.132 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.132 07:32:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.132 07:32:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:03.132 07:32:04 -- target/dif.sh@36 -- # local sub_id=1 00:26:03.132 07:32:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.132 07:32:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.132 00:26:03.132 real 0m23.539s 00:26:03.132 user 2m7.923s 00:26:03.132 sys 0m3.389s 00:26:03.132 07:32:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.132 ************************************ 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 END TEST fio_dif_rand_params 00:26:03.132 ************************************ 00:26:03.132 07:32:04 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:03.132 07:32:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:03.132 07:32:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 ************************************ 00:26:03.132 START TEST fio_dif_digest 00:26:03.132 ************************************ 00:26:03.132 07:32:04 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:03.132 07:32:04 -- target/dif.sh@123 -- # local NULL_DIF 00:26:03.132 07:32:04 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:03.132 07:32:04 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:03.132 07:32:04 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:03.132 07:32:04 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:03.132 07:32:04 -- target/dif.sh@127 -- # numjobs=3 00:26:03.132 07:32:04 -- target/dif.sh@127 -- # iodepth=3 00:26:03.132 07:32:04 -- target/dif.sh@127 -- # runtime=10 00:26:03.132 07:32:04 -- target/dif.sh@128 -- # hdgst=true 00:26:03.132 07:32:04 -- target/dif.sh@128 -- # ddgst=true 00:26:03.132 07:32:04 -- target/dif.sh@130 -- # create_subsystems 0 00:26:03.132 07:32:04 -- target/dif.sh@28 -- # local sub 00:26:03.132 07:32:04 -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.132 07:32:04 -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.132 07:32:04 -- target/dif.sh@18 -- # local sub_id=0 00:26:03.132 07:32:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.132 bdev_null0 00:26:03.132 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.132 07:32:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.132 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.132 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.391 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.391 07:32:04 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.391 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.391 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.391 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.391 07:32:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.391 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.391 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:26:03.391 [2024-11-04 07:32:04.998293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.391 07:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.391 07:32:05 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:03.391 07:32:05 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:03.391 07:32:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:03.391 07:32:05 -- nvmf/common.sh@520 -- # config=() 00:26:03.391 07:32:05 -- nvmf/common.sh@520 -- # local subsystem config 00:26:03.391 07:32:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:03.391 07:32:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:03.391 { 00:26:03.391 "params": { 00:26:03.391 "name": "Nvme$subsystem", 00:26:03.391 "trtype": "$TEST_TRANSPORT", 00:26:03.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.391 "adrfam": "ipv4", 00:26:03.391 "trsvcid": "$NVMF_PORT", 00:26:03.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.391 "hdgst": ${hdgst:-false}, 00:26:03.391 "ddgst": ${ddgst:-false} 00:26:03.391 }, 00:26:03.391 "method": "bdev_nvme_attach_controller" 00:26:03.391 } 00:26:03.391 EOF 00:26:03.391 )") 00:26:03.391 07:32:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.391 07:32:05 -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.391 07:32:05 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.391 07:32:05 -- target/dif.sh@54 -- # local file 00:26:03.391 07:32:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:03.391 07:32:05 -- target/dif.sh@56 -- # cat 00:26:03.391 07:32:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.391 07:32:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:03.391 07:32:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.391 07:32:05 -- common/autotest_common.sh@1320 -- # shift 00:26:03.391 07:32:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:03.391 07:32:05 -- nvmf/common.sh@542 -- # cat 00:26:03.391 07:32:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.391 07:32:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.391 07:32:05 -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:03.391 07:32:05 -- nvmf/common.sh@544 -- # jq . 
00:26:03.391 07:32:05 -- nvmf/common.sh@545 -- # IFS=, 00:26:03.391 07:32:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:03.391 "params": { 00:26:03.391 "name": "Nvme0", 00:26:03.391 "trtype": "tcp", 00:26:03.391 "traddr": "10.0.0.2", 00:26:03.391 "adrfam": "ipv4", 00:26:03.391 "trsvcid": "4420", 00:26:03.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.391 "hdgst": true, 00:26:03.391 "ddgst": true 00:26:03.391 }, 00:26:03.391 "method": "bdev_nvme_attach_controller" 00:26:03.391 }' 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:03.391 07:32:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:03.391 07:32:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:03.391 07:32:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:03.391 07:32:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:03.391 07:32:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:03.391 07:32:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.391 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:03.391 ... 00:26:03.391 fio-3.35 00:26:03.391 Starting 3 threads 00:26:03.958 [2024-11-04 07:32:05.600776] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
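Relative to the earlier rand_params runs, the digest test changes two knobs visible in the trace: the null bdev is created with --dif-type 3, and the generated attach parameters set hdgst and ddgst to true, so every NVMe/TCP PDU carries header and data digests that the target verifies. A sketch of the single config entry gen_nvmf_target_json emits for this case, using only values that appear in the printf output above (the entry is then joined with any others and handed to fio via --spdk_json_conf):

    cat <<'EOF'
    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      }
    }
    EOF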
00:26:03.958 [2024-11-04 07:32:05.600864] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:13.935 00:26:13.935 filename0: (groupid=0, jobs=1): err= 0: pid=102436: Mon Nov 4 07:32:15 2024 00:26:13.935 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10007msec) 00:26:13.935 slat (nsec): min=6275, max=70496, avg=17288.03, stdev=6376.62 00:26:13.935 clat (usec): min=7098, max=52867, avg=11973.90, stdev=8060.98 00:26:13.935 lat (usec): min=7107, max=52886, avg=11991.19, stdev=8061.11 00:26:13.935 clat percentiles (usec): 00:26:13.935 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:26:13.935 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:26:13.935 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[12125], 00:26:13.935 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:26:13.935 | 99.99th=[52691] 00:26:13.935 bw ( KiB/s): min=24832, max=38912, per=34.11%, avg=32100.74, stdev=3749.45, samples=19 00:26:13.935 iops : min= 194, max= 304, avg=250.74, stdev=29.28, samples=19 00:26:13.935 lat (msec) : 10=31.16%, 20=64.76%, 50=0.68%, 100=3.40% 00:26:13.935 cpu : usr=94.85%, sys=3.78%, ctx=18, majf=0, minf=9 00:26:13.935 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.935 filename0: (groupid=0, jobs=1): err= 0: pid=102437: Mon Nov 4 07:32:15 2024 00:26:13.935 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10046msec) 00:26:13.935 slat (nsec): min=6157, max=69735, avg=14477.65, stdev=5877.15 00:26:13.935 clat (usec): min=7822, max=47522, avg=13719.11, stdev=2254.53 00:26:13.935 lat (usec): min=7833, max=47540, avg=13733.59, stdev=2255.62 00:26:13.935 clat percentiles (usec): 00:26:13.935 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[12256], 00:26:13.935 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:26:13.935 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:26:13.935 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19792], 99.95th=[45351], 00:26:13.935 | 99.99th=[47449] 00:26:13.935 bw ( KiB/s): min=25600, max=29952, per=29.77%, avg=28022.26, stdev=1283.62, samples=19 00:26:13.935 iops : min= 200, max= 234, avg=218.89, stdev=10.03, samples=19 00:26:13.935 lat (msec) : 10=10.77%, 20=89.14%, 50=0.09% 00:26:13.935 cpu : usr=94.15%, sys=4.31%, ctx=7, majf=0, minf=9 00:26:13.935 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.935 filename0: (groupid=0, jobs=1): err= 0: pid=102438: Mon Nov 4 07:32:15 2024 00:26:13.935 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10003msec) 00:26:13.935 slat (nsec): min=6063, max=70501, avg=13431.64, stdev=6145.81 00:26:13.935 clat (usec): min=5872, max=17107, avg=11124.55, stdev=2028.93 00:26:13.935 lat (usec): min=5898, max=17125, avg=11137.98, stdev=2030.21 00:26:13.935 clat percentiles (usec): 
00:26:13.935 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 9372], 00:26:13.935 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:26:13.935 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:26:13.935 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15533], 99.95th=[17171], 00:26:13.935 | 99.99th=[17171] 00:26:13.935 bw ( KiB/s): min=31232, max=38656, per=36.60%, avg=34448.68, stdev=1816.79, samples=19 00:26:13.935 iops : min= 244, max= 302, avg=269.11, stdev=14.21, samples=19 00:26:13.935 lat (msec) : 10=21.65%, 20=78.35% 00:26:13.935 cpu : usr=93.82%, sys=4.50%, ctx=8, majf=0, minf=9 00:26:13.935 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.935 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.935 00:26:13.935 Run status group 0 (all jobs): 00:26:13.935 READ: bw=91.9MiB/s (96.4MB/s), 27.3MiB/s-33.7MiB/s (28.6MB/s-35.3MB/s), io=923MiB (968MB), run=10003-10046msec 00:26:14.225 07:32:15 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:14.225 07:32:15 -- target/dif.sh@43 -- # local sub 00:26:14.225 07:32:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.225 07:32:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.225 07:32:15 -- target/dif.sh@36 -- # local sub_id=0 00:26:14.225 07:32:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.225 07:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.225 07:32:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.225 07:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.225 07:32:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.225 07:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.225 07:32:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.225 07:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.225 00:26:14.225 real 0m11.043s 00:26:14.225 user 0m29.008s 00:26:14.225 sys 0m1.532s 00:26:14.225 07:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.225 ************************************ 00:26:14.225 END TEST fio_dif_digest 00:26:14.225 07:32:16 -- common/autotest_common.sh@10 -- # set +x 00:26:14.225 ************************************ 00:26:14.225 07:32:16 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:14.225 07:32:16 -- target/dif.sh@147 -- # nvmftestfini 00:26:14.225 07:32:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:14.225 07:32:16 -- nvmf/common.sh@116 -- # sync 00:26:14.483 07:32:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:14.483 07:32:16 -- nvmf/common.sh@119 -- # set +e 00:26:14.483 07:32:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:14.483 07:32:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:14.483 rmmod nvme_tcp 00:26:14.483 rmmod nvme_fabrics 00:26:14.483 rmmod nvme_keyring 00:26:14.483 07:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:14.483 07:32:16 -- nvmf/common.sh@123 -- # set -e 00:26:14.483 07:32:16 -- nvmf/common.sh@124 -- # return 0 00:26:14.483 07:32:16 -- nvmf/common.sh@477 -- # '[' -n 101666 ']' 00:26:14.483 07:32:16 -- nvmf/common.sh@478 -- # killprocess 101666 00:26:14.483 07:32:16 -- common/autotest_common.sh@926 -- # '[' -z 101666 ']' 
00:26:14.483 07:32:16 -- common/autotest_common.sh@930 -- # kill -0 101666 00:26:14.483 07:32:16 -- common/autotest_common.sh@931 -- # uname 00:26:14.483 07:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:14.483 07:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101666 00:26:14.483 07:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:14.483 07:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:14.483 killing process with pid 101666 00:26:14.483 07:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101666' 00:26:14.483 07:32:16 -- common/autotest_common.sh@945 -- # kill 101666 00:26:14.483 07:32:16 -- common/autotest_common.sh@950 -- # wait 101666 00:26:14.742 07:32:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:14.742 07:32:16 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:15.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:15.001 Waiting for block devices as requested 00:26:15.259 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:15.259 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:15.259 07:32:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:15.259 07:32:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:15.259 07:32:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.259 07:32:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:15.259 07:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.259 07:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:15.259 07:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.259 07:32:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:15.259 00:26:15.259 real 1m0.031s 00:26:15.259 user 3m51.481s 00:26:15.259 sys 0m14.143s 00:26:15.259 07:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.259 07:32:17 -- common/autotest_common.sh@10 -- # set +x 00:26:15.259 ************************************ 00:26:15.259 END TEST nvmf_dif 00:26:15.259 ************************************ 00:26:15.518 07:32:17 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:15.518 07:32:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:15.518 07:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:15.518 07:32:17 -- common/autotest_common.sh@10 -- # set +x 00:26:15.518 ************************************ 00:26:15.518 START TEST nvmf_abort_qd_sizes 00:26:15.518 ************************************ 00:26:15.518 07:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:15.518 * Looking for test storage... 
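The nvmf_abort_qd_sizes test that starts here repeatedly drives SPDK's abort example application against a target (first a userspace nvmf_tgt, then a kernel nvmet target) at queue depths of 4, 24 and 64, checking that in-flight I/O can be aborted cleanly at each depth. Each per-depth iteration seen later in this log boils down to an invocation of the shape sketched below; the flag meanings follow SPDK's example-app conventions, and the transport ID string uses the userspace target's values from further down.

    # Shape of the per-queue-depth runs that appear later in this log.
    #   -q        queue depth under test (the harness loops over 4, 24 and 64)
    #   -w rw -M 50   mixed workload, 50% reads
    #   -o 4096       4 KiB I/O size
    #   -r            target transport ID (userspace spdk_target values shown here)
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'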
00:26:15.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:15.518 07:32:17 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:15.518 07:32:17 -- nvmf/common.sh@7 -- # uname -s 00:26:15.518 07:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.518 07:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.518 07:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.518 07:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.518 07:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.518 07:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.518 07:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.518 07:32:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.518 07:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.518 07:32:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:26:15.518 07:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a 00:26:15.518 07:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.518 07:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.518 07:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:15.518 07:32:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:15.518 07:32:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.518 07:32:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.518 07:32:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.518 07:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.518 07:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.518 07:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.518 07:32:17 -- paths/export.sh@5 -- # export PATH 00:26:15.518 07:32:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.518 07:32:17 -- nvmf/common.sh@46 -- # : 0 00:26:15.518 07:32:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:15.518 07:32:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:15.518 07:32:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:15.518 07:32:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.518 07:32:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.518 07:32:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:15.518 07:32:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:15.518 07:32:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:15.518 07:32:17 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:15.518 07:32:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:15.518 07:32:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.518 07:32:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:15.518 07:32:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:15.518 07:32:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:15.518 07:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.518 07:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:15.518 07:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.518 07:32:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:15.518 07:32:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:15.518 07:32:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.518 07:32:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.518 07:32:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:15.518 07:32:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:15.518 07:32:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:15.518 07:32:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:15.518 07:32:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:15.518 07:32:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.518 07:32:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:15.518 07:32:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:15.518 07:32:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:15.518 07:32:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:15.518 07:32:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:15.518 07:32:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:15.518 Cannot find device "nvmf_tgt_br" 00:26:15.518 07:32:17 -- nvmf/common.sh@154 -- # true 00:26:15.518 07:32:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:15.518 Cannot find device "nvmf_tgt_br2" 00:26:15.518 07:32:17 -- nvmf/common.sh@155 -- # true 
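The link deletions just above only clear leftovers from a previous run; the nvmf_veth_init steps that follow rebuild the initiator/target wiring from scratch. Condensed out of the xtrace output below, and using the interface names and addresses the harness assigns, the topology amounts to roughly the following sketch (a summary, not a substitute for the script):

    # One network namespace for the target, veth pairs bridged back to the host-side initiator.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target addr 1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target addr 2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this, the three pings further down (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the path in both directions before the target is started.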
00:26:15.518 07:32:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:15.518 07:32:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:15.518 Cannot find device "nvmf_tgt_br" 00:26:15.518 07:32:17 -- nvmf/common.sh@157 -- # true 00:26:15.518 07:32:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:15.518 Cannot find device "nvmf_tgt_br2" 00:26:15.518 07:32:17 -- nvmf/common.sh@158 -- # true 00:26:15.518 07:32:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:15.777 07:32:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:15.777 07:32:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:15.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.777 07:32:17 -- nvmf/common.sh@161 -- # true 00:26:15.777 07:32:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:15.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.777 07:32:17 -- nvmf/common.sh@162 -- # true 00:26:15.777 07:32:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:15.777 07:32:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:15.777 07:32:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:15.777 07:32:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:15.777 07:32:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:15.777 07:32:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:15.777 07:32:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:15.777 07:32:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:15.777 07:32:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:15.777 07:32:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:15.777 07:32:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:15.777 07:32:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:15.777 07:32:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:15.777 07:32:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:15.777 07:32:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:15.777 07:32:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:15.777 07:32:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:15.777 07:32:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:15.777 07:32:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:15.777 07:32:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:15.777 07:32:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:15.777 07:32:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:15.777 07:32:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:15.777 07:32:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:15.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:15.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:26:15.777 00:26:15.777 --- 10.0.0.2 ping statistics --- 00:26:15.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.777 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:15.777 07:32:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:15.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:15.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:26:15.777 00:26:15.777 --- 10.0.0.3 ping statistics --- 00:26:15.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.777 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:15.777 07:32:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:15.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:26:15.778 00:26:15.778 --- 10.0.0.1 ping statistics --- 00:26:15.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.778 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:26:15.778 07:32:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.778 07:32:17 -- nvmf/common.sh@421 -- # return 0 00:26:15.778 07:32:17 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:15.778 07:32:17 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:16.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.712 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.712 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.712 07:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.712 07:32:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:16.712 07:32:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:16.712 07:32:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.712 07:32:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:16.712 07:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:16.712 07:32:18 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:16.712 07:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:16.712 07:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:16.712 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:26:16.712 07:32:18 -- nvmf/common.sh@469 -- # nvmfpid=103034 00:26:16.712 07:32:18 -- nvmf/common.sh@470 -- # waitforlisten 103034 00:26:16.712 07:32:18 -- common/autotest_common.sh@819 -- # '[' -z 103034 ']' 00:26:16.712 07:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.712 07:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.713 07:32:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:16.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.713 07:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.713 07:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.713 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:26:16.971 [2024-11-04 07:32:18.590584] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
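With connectivity verified, nvmfappstart launches the target application inside that namespace and blocks until its RPC socket answers. Stripped of the xtrace noise, the step amounts to the sketch below; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.

    # Start nvmf_tgt in the target namespace (shm id 0, full tracepoint mask, 4-core mask, as in the log).
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # Wait until the UNIX-domain RPC socket accepts requests before issuing any RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done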
00:26:16.971 [2024-11-04 07:32:18.590694] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.971 [2024-11-04 07:32:18.735801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.229 [2024-11-04 07:32:18.815261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:17.229 [2024-11-04 07:32:18.815445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.229 [2024-11-04 07:32:18.815463] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.229 [2024-11-04 07:32:18.815476] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.229 [2024-11-04 07:32:18.815657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.229 [2024-11-04 07:32:18.815798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.229 [2024-11-04 07:32:18.816199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.229 [2024-11-04 07:32:18.816208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.796 07:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.796 07:32:19 -- common/autotest_common.sh@852 -- # return 0 00:26:17.796 07:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:17.796 07:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:17.796 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.053 07:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.053 07:32:19 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:18.053 07:32:19 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:18.053 07:32:19 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:18.053 07:32:19 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:18.053 07:32:19 -- scripts/common.sh@312 -- # local nvmes 00:26:18.053 07:32:19 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:18.053 07:32:19 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:18.053 07:32:19 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:18.053 07:32:19 -- scripts/common.sh@297 -- # local bdf= 00:26:18.053 07:32:19 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:18.053 07:32:19 -- scripts/common.sh@232 -- # local class 00:26:18.053 07:32:19 -- scripts/common.sh@233 -- # local subclass 00:26:18.053 07:32:19 -- scripts/common.sh@234 -- # local progif 00:26:18.053 07:32:19 -- scripts/common.sh@235 -- # printf %02x 1 00:26:18.053 07:32:19 -- scripts/common.sh@235 -- # class=01 00:26:18.053 07:32:19 -- scripts/common.sh@236 -- # printf %02x 8 00:26:18.053 07:32:19 -- scripts/common.sh@236 -- # subclass=08 00:26:18.053 07:32:19 -- scripts/common.sh@237 -- # printf %02x 2 00:26:18.053 07:32:19 -- scripts/common.sh@237 -- # progif=02 00:26:18.053 07:32:19 -- scripts/common.sh@239 -- # hash lspci 00:26:18.053 07:32:19 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:18.053 07:32:19 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:18.053 07:32:19 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:18.053 07:32:19 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:18.053 07:32:19 -- scripts/common.sh@244 -- # tr -d '"' 00:26:18.053 07:32:19 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:18.053 07:32:19 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:18.053 07:32:19 -- scripts/common.sh@15 -- # local i 00:26:18.053 07:32:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:18.053 07:32:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:18.053 07:32:19 -- scripts/common.sh@24 -- # return 0 00:26:18.053 07:32:19 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:18.053 07:32:19 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:18.053 07:32:19 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:18.053 07:32:19 -- scripts/common.sh@15 -- # local i 00:26:18.053 07:32:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:18.053 07:32:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:18.053 07:32:19 -- scripts/common.sh@24 -- # return 0 00:26:18.053 07:32:19 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:18.054 07:32:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:18.054 07:32:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:18.054 07:32:19 -- scripts/common.sh@322 -- # uname -s 00:26:18.054 07:32:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:18.054 07:32:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:18.054 07:32:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:18.054 07:32:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:18.054 07:32:19 -- scripts/common.sh@322 -- # uname -s 00:26:18.054 07:32:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:18.054 07:32:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:18.054 07:32:19 -- scripts/common.sh@327 -- # (( 2 )) 00:26:18.054 07:32:19 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:18.054 07:32:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.054 07:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 ************************************ 00:26:18.054 START TEST spdk_target_abort 00:26:18.054 ************************************ 00:26:18.054 07:32:19 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:18.054 07:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 spdk_targetn1 00:26:18.054 07:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.054 07:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 [2024-11-04 
07:32:19.795040] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.054 07:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:18.054 07:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 07:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:18.054 07:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 07:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:18.054 07:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.054 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.054 [2024-11-04 07:32:19.823249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.054 07:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:18.054 07:32:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:21.336 Initializing NVMe Controllers 00:26:21.337 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:21.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:21.337 Initialization complete. Launching workers. 00:26:21.337 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11502, failed: 0 00:26:21.337 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1170, failed to submit 10332 00:26:21.337 success 742, unsuccess 428, failed 0 00:26:21.337 07:32:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:21.337 07:32:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:24.620 Initializing NVMe Controllers 00:26:24.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:24.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:24.620 Initialization complete. Launching workers. 00:26:24.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5951, failed: 0 00:26:24.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1249, failed to submit 4702 00:26:24.620 success 261, unsuccess 988, failed 0 00:26:24.620 07:32:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.621 07:32:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:27.906 Initializing NVMe Controllers 00:26:27.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:27.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:27.906 Initialization complete. Launching workers. 
00:26:27.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 33171, failed: 0 00:26:27.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2716, failed to submit 30455 00:26:27.906 success 510, unsuccess 2206, failed 0 00:26:27.906 07:32:29 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:27.906 07:32:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.906 07:32:29 -- common/autotest_common.sh@10 -- # set +x 00:26:27.906 07:32:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.906 07:32:29 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:27.906 07:32:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.906 07:32:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 07:32:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.473 07:32:30 -- target/abort_qd_sizes.sh@62 -- # killprocess 103034 00:26:28.473 07:32:30 -- common/autotest_common.sh@926 -- # '[' -z 103034 ']' 00:26:28.473 07:32:30 -- common/autotest_common.sh@930 -- # kill -0 103034 00:26:28.473 07:32:30 -- common/autotest_common.sh@931 -- # uname 00:26:28.473 07:32:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.473 07:32:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103034 00:26:28.473 07:32:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.473 07:32:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.473 killing process with pid 103034 00:26:28.473 07:32:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103034' 00:26:28.473 07:32:30 -- common/autotest_common.sh@945 -- # kill 103034 00:26:28.473 07:32:30 -- common/autotest_common.sh@950 -- # wait 103034 00:26:28.731 00:26:28.731 real 0m10.623s 00:26:28.731 user 0m43.670s 00:26:28.731 sys 0m1.761s 00:26:28.731 07:32:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.731 07:32:30 -- common/autotest_common.sh@10 -- # set +x 00:26:28.731 ************************************ 00:26:28.731 END TEST spdk_target_abort 00:26:28.731 ************************************ 00:26:28.731 07:32:30 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:28.731 07:32:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:28.731 07:32:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.731 07:32:30 -- common/autotest_common.sh@10 -- # set +x 00:26:28.731 ************************************ 00:26:28.731 START TEST kernel_target_abort 00:26:28.731 ************************************ 00:26:28.731 07:32:30 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:28.731 07:32:30 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:28.731 07:32:30 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:28.731 07:32:30 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:28.731 07:32:30 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.731 07:32:30 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:28.731 07:32:30 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:28.731 07:32:30 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.731 07:32:30 -- nvmf/common.sh@627 -- # local block nvme 00:26:28.731 07:32:30 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:28.731 07:32:30 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:28.731 07:32:30 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.731 07:32:30 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:28.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:28.990 Waiting for block devices as requested 00:26:29.248 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:29.248 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:29.248 07:32:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.248 07:32:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:29.248 07:32:30 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:29.248 07:32:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:29.248 07:32:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:29.248 No valid GPT data, bailing 00:26:29.248 07:32:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:29.248 07:32:31 -- scripts/common.sh@393 -- # pt= 00:26:29.248 07:32:31 -- scripts/common.sh@394 -- # return 1 00:26:29.248 07:32:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:29.248 07:32:31 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.248 07:32:31 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:29.248 07:32:31 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:29.248 07:32:31 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:29.248 07:32:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:29.506 No valid GPT data, bailing 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # pt= 00:26:29.506 07:32:31 -- scripts/common.sh@394 -- # return 1 00:26:29.506 07:32:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:29.506 07:32:31 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.506 07:32:31 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:29.506 07:32:31 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:29.506 07:32:31 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:29.506 07:32:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:29.506 No valid GPT data, bailing 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # pt= 00:26:29.506 07:32:31 -- scripts/common.sh@394 -- # return 1 00:26:29.506 07:32:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:29.506 07:32:31 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:29.506 07:32:31 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:29.506 07:32:31 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:29.506 07:32:31 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:29.506 07:32:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:29.506 No valid GPT data, bailing 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:29.506 07:32:31 -- scripts/common.sh@393 -- # pt= 00:26:29.506 07:32:31 -- scripts/common.sh@394 -- # return 1 00:26:29.506 07:32:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:29.506 07:32:31 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:29.506 07:32:31 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:29.506 07:32:31 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:29.506 07:32:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:29.506 07:32:31 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:29.506 07:32:31 -- nvmf/common.sh@654 -- # echo 1 00:26:29.506 07:32:31 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:29.506 07:32:31 -- nvmf/common.sh@656 -- # echo 1 00:26:29.506 07:32:31 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:29.506 07:32:31 -- nvmf/common.sh@663 -- # echo tcp 00:26:29.506 07:32:31 -- nvmf/common.sh@664 -- # echo 4420 00:26:29.506 07:32:31 -- nvmf/common.sh@665 -- # echo ipv4 00:26:29.506 07:32:31 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:29.506 07:32:31 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a --hostid=4c98ac9e-fdc9-47e3-8332-4fa4080b4c3a -a 10.0.0.1 -t tcp -s 4420 00:26:29.765 00:26:29.765 Discovery Log Number of Records 2, Generation counter 2 00:26:29.765 =====Discovery Log Entry 0====== 00:26:29.765 trtype: tcp 00:26:29.765 adrfam: ipv4 00:26:29.765 subtype: current discovery subsystem 00:26:29.765 treq: not specified, sq flow control disable supported 00:26:29.765 portid: 1 00:26:29.765 trsvcid: 4420 00:26:29.765 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:29.765 traddr: 10.0.0.1 00:26:29.765 eflags: none 00:26:29.765 sectype: none 00:26:29.765 =====Discovery Log Entry 1====== 00:26:29.765 trtype: tcp 00:26:29.765 adrfam: ipv4 00:26:29.765 subtype: nvme subsystem 00:26:29.765 treq: not specified, sq flow control disable supported 00:26:29.765 portid: 1 00:26:29.765 trsvcid: 4420 00:26:29.765 subnqn: kernel_target 00:26:29.765 traddr: 10.0.0.1 00:26:29.765 eflags: none 00:26:29.765 sectype: none 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
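The configure_kernel_target steps traced above build a kernel-space nvmet target entirely through configfs, using /dev/nvme1n3, the last of the local NVMe namespaces it checked (none of which carried an in-use partition table), as the backing device. Because xtrace does not show redirections, the attribute files the echoed values land in are not visible in the log; the sketch below fills them in with the standard nvmet configfs attribute names, which is an assumption, while the device, address, port and subsystem name are the ones shown above.

    # Assemble the kernel_target subsystem and a TCP port on 10.0.0.1:4420 via configfs.
    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    echo SPDK-kernel_target > "$sub/attr_serial"   # serial string; exact attribute file inferred
    echo 1                  > "$sub/attr_allow_any_host"
    echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"
    echo 1                  > "$sub/namespaces/1/enable"
    echo 10.0.0.1           > "$port/addr_traddr"
    echo tcp                > "$port/addr_trtype"
    echo 4420               > "$port/addr_trsvcid"
    echo ipv4               > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/kernel_target"
    # The discovery listing above ('nvme discover ... -a 10.0.0.1 -t tcp -s 4420') then reports
    # both the discovery subsystem and kernel_target, confirming the port is live.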
00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:29.765 07:32:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:33.052 Initializing NVMe Controllers 00:26:33.052 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:33.052 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:33.052 Initialization complete. Launching workers. 00:26:33.052 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31678, failed: 0 00:26:33.052 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31678, failed to submit 0 00:26:33.052 success 0, unsuccess 31678, failed 0 00:26:33.052 07:32:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:33.052 07:32:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:36.474 Initializing NVMe Controllers 00:26:36.474 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:36.474 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:36.474 Initialization complete. Launching workers. 00:26:36.474 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66382, failed: 0 00:26:36.474 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26962, failed to submit 39420 00:26:36.474 success 0, unsuccess 26962, failed 0 00:26:36.474 07:32:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.474 07:32:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:39.761 Initializing NVMe Controllers 00:26:39.761 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:39.761 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:39.761 Initialization complete. Launching workers. 
00:26:39.761 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 71375, failed: 0 00:26:39.761 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17830, failed to submit 53545 00:26:39.761 success 0, unsuccess 17830, failed 0 00:26:39.761 07:32:40 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:39.761 07:32:40 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:39.761 07:32:40 -- nvmf/common.sh@677 -- # echo 0 00:26:39.761 07:32:40 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:39.761 07:32:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:39.761 07:32:40 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:39.761 07:32:40 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:39.761 07:32:40 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:39.761 07:32:40 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:39.761 00:26:39.761 real 0m10.519s 00:26:39.761 user 0m5.329s 00:26:39.761 sys 0m2.473s 00:26:39.761 07:32:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.761 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:39.761 ************************************ 00:26:39.761 END TEST kernel_target_abort 00:26:39.761 ************************************ 00:26:39.761 07:32:40 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:39.761 07:32:40 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:39.761 07:32:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:39.762 07:32:40 -- nvmf/common.sh@116 -- # sync 00:26:39.762 07:32:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:39.762 07:32:41 -- nvmf/common.sh@119 -- # set +e 00:26:39.762 07:32:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:39.762 07:32:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:39.762 rmmod nvme_tcp 00:26:39.762 rmmod nvme_fabrics 00:26:39.762 rmmod nvme_keyring 00:26:39.762 07:32:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:39.762 07:32:41 -- nvmf/common.sh@123 -- # set -e 00:26:39.762 07:32:41 -- nvmf/common.sh@124 -- # return 0 00:26:39.762 07:32:41 -- nvmf/common.sh@477 -- # '[' -n 103034 ']' 00:26:39.762 07:32:41 -- nvmf/common.sh@478 -- # killprocess 103034 00:26:39.762 07:32:41 -- common/autotest_common.sh@926 -- # '[' -z 103034 ']' 00:26:39.762 07:32:41 -- common/autotest_common.sh@930 -- # kill -0 103034 00:26:39.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (103034) - No such process 00:26:39.762 Process with pid 103034 is not found 00:26:39.762 07:32:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 103034 is not found' 00:26:39.762 07:32:41 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:39.762 07:32:41 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:40.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.020 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:40.279 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:40.279 07:32:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:40.279 07:32:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:40.279 07:32:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.279 07:32:41 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:40.279 07:32:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.279 07:32:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:40.279 07:32:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.279 07:32:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:40.279 00:26:40.279 real 0m24.791s 00:26:40.279 user 0m50.419s 00:26:40.279 sys 0m5.661s 00:26:40.279 07:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.279 07:32:41 -- common/autotest_common.sh@10 -- # set +x 00:26:40.279 ************************************ 00:26:40.279 END TEST nvmf_abort_qd_sizes 00:26:40.279 ************************************ 00:26:40.279 07:32:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:40.279 07:32:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:40.279 07:32:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:40.279 07:32:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:40.279 07:32:41 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:40.279 07:32:41 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:40.279 07:32:41 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:40.279 07:32:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.279 07:32:41 -- common/autotest_common.sh@10 -- # set +x 00:26:40.279 07:32:41 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:40.279 07:32:41 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:40.279 07:32:41 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:40.279 07:32:41 -- common/autotest_common.sh@10 -- # set +x 00:26:42.183 INFO: APP EXITING 00:26:42.183 INFO: killing all VMs 00:26:42.183 INFO: killing vhost app 00:26:42.183 INFO: EXIT DONE 00:26:42.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:42.750 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:42.750 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:43.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.686 Cleaning 00:26:43.686 Removing: /var/run/dpdk/spdk0/config 00:26:43.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:43.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:43.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:43.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:43.686 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:43.686 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:43.686 Removing: /var/run/dpdk/spdk1/config 00:26:43.686 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:43.686 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:43.686 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:43.686 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:43.686 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:43.686 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:43.686 Removing: /var/run/dpdk/spdk2/config 00:26:43.686 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:43.686 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:43.686 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:43.686 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:43.686 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:43.686 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:43.686 Removing: /var/run/dpdk/spdk3/config 00:26:43.686 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:43.686 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:43.686 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:43.686 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:43.686 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:43.686 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:43.686 Removing: /var/run/dpdk/spdk4/config 00:26:43.686 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:43.686 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:43.686 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:43.686 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:43.686 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:43.686 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:43.686 Removing: /dev/shm/nvmf_trace.0 00:26:43.686 Removing: /dev/shm/spdk_tgt_trace.pid67617 00:26:43.686 Removing: /var/run/dpdk/spdk0 00:26:43.686 Removing: /var/run/dpdk/spdk1 00:26:43.686 Removing: /var/run/dpdk/spdk2 00:26:43.686 Removing: /var/run/dpdk/spdk3 00:26:43.686 Removing: /var/run/dpdk/spdk4 00:26:43.686 Removing: /var/run/dpdk/spdk_pid100051 00:26:43.686 Removing: /var/run/dpdk/spdk_pid100252 00:26:43.686 Removing: /var/run/dpdk/spdk_pid100537 00:26:43.686 Removing: /var/run/dpdk/spdk_pid100836 00:26:43.686 Removing: /var/run/dpdk/spdk_pid101378 00:26:43.686 Removing: /var/run/dpdk/spdk_pid101383 00:26:43.686 Removing: /var/run/dpdk/spdk_pid101742 00:26:43.686 Removing: /var/run/dpdk/spdk_pid101902 00:26:43.686 Removing: /var/run/dpdk/spdk_pid102065 00:26:43.686 Removing: /var/run/dpdk/spdk_pid102163 00:26:43.686 Removing: /var/run/dpdk/spdk_pid102318 00:26:43.686 Removing: /var/run/dpdk/spdk_pid102427 00:26:43.686 Removing: /var/run/dpdk/spdk_pid103103 00:26:43.686 Removing: /var/run/dpdk/spdk_pid103138 00:26:43.686 Removing: /var/run/dpdk/spdk_pid103172 00:26:43.945 Removing: /var/run/dpdk/spdk_pid103422 00:26:43.945 Removing: /var/run/dpdk/spdk_pid103453 00:26:43.945 Removing: /var/run/dpdk/spdk_pid103493 00:26:43.945 Removing: /var/run/dpdk/spdk_pid67473 00:26:43.945 Removing: /var/run/dpdk/spdk_pid67617 00:26:43.945 Removing: /var/run/dpdk/spdk_pid67923 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68192 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68367 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68442 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68528 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68622 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68655 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68696 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68751 00:26:43.945 Removing: /var/run/dpdk/spdk_pid68855 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69473 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69532 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69601 00:26:43.945 Removing: 
/var/run/dpdk/spdk_pid69629 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69708 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69736 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69815 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69843 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69888 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69918 00:26:43.945 Removing: /var/run/dpdk/spdk_pid69975 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70005 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70145 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70186 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70254 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70329 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70348 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70412 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70428 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70468 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70482 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70515 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70536 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70565 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70585 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70619 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70633 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70672 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70689 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70724 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70743 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70772 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70792 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70827 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70847 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70876 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70895 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70930 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70944 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70984 00:26:43.945 Removing: /var/run/dpdk/spdk_pid70998 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71031 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71052 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71081 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71106 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71135 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71149 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71189 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71205 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71240 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71262 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71294 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71317 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71354 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71374 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71408 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71428 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71458 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71527 00:26:43.945 Removing: /var/run/dpdk/spdk_pid71632 00:26:43.946 Removing: /var/run/dpdk/spdk_pid72047 00:26:43.946 Removing: /var/run/dpdk/spdk_pid78943 00:26:43.946 Removing: /var/run/dpdk/spdk_pid79289 00:26:44.204 Removing: /var/run/dpdk/spdk_pid81708 00:26:44.204 Removing: /var/run/dpdk/spdk_pid82078 00:26:44.204 Removing: /var/run/dpdk/spdk_pid82341 00:26:44.204 Removing: /var/run/dpdk/spdk_pid82387 00:26:44.204 Removing: /var/run/dpdk/spdk_pid82694 00:26:44.204 Removing: /var/run/dpdk/spdk_pid82747 00:26:44.204 Removing: /var/run/dpdk/spdk_pid83118 00:26:44.204 Removing: /var/run/dpdk/spdk_pid83641 00:26:44.204 Removing: /var/run/dpdk/spdk_pid84079 00:26:44.204 Removing: /var/run/dpdk/spdk_pid84992 
00:26:44.204 Removing: /var/run/dpdk/spdk_pid85965 00:26:44.204 Removing: /var/run/dpdk/spdk_pid86083 00:26:44.204 Removing: /var/run/dpdk/spdk_pid86147 00:26:44.204 Removing: /var/run/dpdk/spdk_pid87609 00:26:44.204 Removing: /var/run/dpdk/spdk_pid87845 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88298 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88407 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88556 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88603 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88643 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88694 00:26:44.204 Removing: /var/run/dpdk/spdk_pid88857 00:26:44.204 Removing: /var/run/dpdk/spdk_pid89005 00:26:44.204 Removing: /var/run/dpdk/spdk_pid89269 00:26:44.204 Removing: /var/run/dpdk/spdk_pid89389 00:26:44.204 Removing: /var/run/dpdk/spdk_pid89805 00:26:44.204 Removing: /var/run/dpdk/spdk_pid90196 00:26:44.204 Removing: /var/run/dpdk/spdk_pid90198 00:26:44.204 Removing: /var/run/dpdk/spdk_pid92447 00:26:44.204 Removing: /var/run/dpdk/spdk_pid92757 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93242 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93254 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93583 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93607 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93622 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93647 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93660 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93798 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93805 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93908 00:26:44.204 Removing: /var/run/dpdk/spdk_pid93920 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94024 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94026 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94501 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94544 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94701 00:26:44.204 Removing: /var/run/dpdk/spdk_pid94818 00:26:44.204 Removing: /var/run/dpdk/spdk_pid95210 00:26:44.204 Removing: /var/run/dpdk/spdk_pid95465 00:26:44.204 Removing: /var/run/dpdk/spdk_pid95947 00:26:44.204 Removing: /var/run/dpdk/spdk_pid96503 00:26:44.204 Removing: /var/run/dpdk/spdk_pid96965 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97061 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97146 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97218 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97381 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97471 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97557 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97653 00:26:44.204 Removing: /var/run/dpdk/spdk_pid97977 00:26:44.204 Removing: /var/run/dpdk/spdk_pid98680 00:26:44.204 Clean 00:26:44.463 killing process with pid 61829 00:26:44.463 killing process with pid 61832 00:26:44.463 07:32:46 -- common/autotest_common.sh@1436 -- # return 0 00:26:44.463 07:32:46 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:44.463 07:32:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:44.463 07:32:46 -- common/autotest_common.sh@10 -- # set +x 00:26:44.463 07:32:46 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:44.463 07:32:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:44.463 07:32:46 -- common/autotest_common.sh@10 -- # set +x 00:26:44.463 07:32:46 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:44.464 07:32:46 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:44.464 07:32:46 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:44.464 07:32:46 
-- spdk/autotest.sh@394 -- # hash lcov
07:32:46 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
07:32:46 -- spdk/autotest.sh@396 -- # hostname
07:32:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:44.722 geninfo: WARNING: invalid characters removed from testname!
00:27:06.652 07:33:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:07.590 07:33:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:10.124 07:33:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:12.070 07:33:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:13.974 07:33:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:16.506 07:33:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:18.408 07:33:19 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:18.408 07:33:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:18.408 07:33:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:18.408 07:33:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:18.408 07:33:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
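Note: the spdk/autotest.sh steps above are the coverage post-processing pass: lcov first captures the counters produced while the tests ran, merges them with the post-build baseline, and then strips everything that is not SPDK's own code. A minimal sketch of the same capture/merge/filter sequence, runnable by hand, assuming the tree was built with --enable-coverage and cov_base.info already exists (directory paths and the test name are copied from the log; the variable names are illustrative only, and this is not the actual autotest.sh code):

#!/usr/bin/env bash
# Sketch only: mirrors the lcov invocations shown in the log above.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk      # instrumented build tree
OUT_DIR="$SPDK_DIR/../output"              # autotest output directory
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1. Capture the counters accumulated while the tests ran.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t fedora39-cloud-1721788873-2326 -o "$OUT_DIR/cov_test.info"

# 2. Merge the test capture with the zero-count baseline taken right after the build.
lcov $LCOV_OPTS -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

# 3. Remove code that is not SPDK's own (DPDK submodule, system headers, sample apps).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done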
00:27:18.408 07:33:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.408 07:33:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.408 07:33:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.408 07:33:20 -- paths/export.sh@5 -- $ export PATH
00:27:18.408 07:33:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.408 07:33:20 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:18.408 07:33:20 -- common/autobuild_common.sh@440 -- $ date +%s
00:27:18.408 07:33:20 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730705600.XXXXXX
00:27:18.408 07:33:20 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730705600.J9g95U
00:27:18.408 07:33:20 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:27:18.408 07:33:20 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:27:18.408 07:33:20 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:27:18.408 07:33:20 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:27:18.408 07:33:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:18.408 07:33:20 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:18.408 07:33:20 -- common/autobuild_common.sh@456 -- $ get_config_params
00:27:18.408 07:33:20 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:27:18.408 07:33:20 -- common/autotest_common.sh@10 -- $ set +x
00:27:18.408 07:33:20 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:27:18.409 07:33:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:18.409 07:33:20 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
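Note: the duplicated directories in the PATH values above are expected: /etc/opt/spdk-pkgdep/paths/export.sh simply prepends each toolchain directory to whatever PATH it inherited, and that inherited PATH had already been extended the same way earlier in the job. A small sketch of the prepend pattern, followed by an order-preserving de-duplication step that the script itself does not perform (shown only as an illustration):

#!/usr/bin/env bash
# Prepend pattern as seen in the log; not the literal contents of export.sh.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH

# Illustrative only: collapse duplicate entries while keeping the first occurrence of each.
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
export PATH
echo "$PATH"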
00:27:18.409 07:33:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:18.409 07:33:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:18.409 07:33:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:18.409 07:33:20 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:18.409 07:33:20 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:18.409 07:33:20 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:18.409 07:33:20 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:18.409 07:33:20 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:18.677 + [[ -n 5964 ]]
00:27:18.677 + sudo kill 5964
00:27:18.677 [Pipeline] }
00:27:18.693 [Pipeline] // timeout
00:27:18.699 [Pipeline] }
00:27:18.713 [Pipeline] // stage
00:27:18.719 [Pipeline] }
00:27:18.735 [Pipeline] // catchError
00:27:18.745 [Pipeline] stage
00:27:18.747 [Pipeline] { (Stop VM)
00:27:18.761 [Pipeline] sh
00:27:19.042 + vagrant halt
00:27:22.338 ==> default: Halting domain...
00:27:28.915 [Pipeline] sh
00:27:29.194 + vagrant destroy -f
00:27:31.726 ==> default: Removing domain...
00:27:32.029 [Pipeline] sh
00:27:32.309 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:32.318 [Pipeline] }
00:27:32.332 [Pipeline] // stage
00:27:32.338 [Pipeline] }
00:27:32.351 [Pipeline] // dir
00:27:32.356 [Pipeline] }
00:27:32.371 [Pipeline] // wrap
00:27:32.377 [Pipeline] }
00:27:32.389 [Pipeline] // catchError
00:27:32.398 [Pipeline] stage
00:27:32.400 [Pipeline] { (Epilogue)
00:27:32.412 [Pipeline] sh
00:27:32.694 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:37.971 [Pipeline] catchError
00:27:37.973 [Pipeline] {
00:27:37.986 [Pipeline] sh
00:27:38.267 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:38.526 Artifacts sizes are good
00:27:38.534 [Pipeline] }
00:27:38.548 [Pipeline] // catchError
00:27:38.558 [Pipeline] archiveArtifacts
00:27:38.565 Archiving artifacts
00:27:38.683 [Pipeline] cleanWs
00:27:38.693 [WS-CLEANUP] Deleting project workspace...
00:27:38.693 [WS-CLEANUP] Deferred wipeout is used...
00:27:38.699 [WS-CLEANUP] done
00:27:38.701 [Pipeline] }
00:27:38.715 [Pipeline] // stage
00:27:38.720 [Pipeline] }
00:27:38.733 [Pipeline] // node
00:27:38.738 [Pipeline] End of Pipeline
00:27:38.786 Finished: SUCCESS
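Note: the tail of the log above is the job's standard teardown: the autotest VM is halted and destroyed with Vagrant, the output directory is moved back into the Jenkins workspace, and helper scripts compress and size-check the artifacts before they are archived and the workspace is cleaned. A rough shell equivalent of those steps outside Jenkins (the workspace path and script names are taken from the log; running them by hand is only a sketch, not how the pipeline is actually driven):

#!/usr/bin/env bash
# Sketch of the teardown sequence visible above; the real steps are Jenkins pipeline stages.
set -euo pipefail

WORKSPACE=/var/jenkins/workspace/nvmf-tcp-vg-autotest   # path shown in the log

vagrant halt                    # "Halting domain..."
vagrant destroy -f              # "Removing domain..."

mv output "$WORKSPACE/output"   # hand the results back to the Jenkins workspace

# Compress and sanity-check artifact sizes, as the jbp helper scripts do.
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh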